Designing an AI-Powered Metadata Tagging Tool for SolarSPELL Content Curation

AI-assisted tool to support efficient and accurate metadata tagging across SolarSPELL’s digital libraries.

Jul'25 - Present

SolarSPELL

AI-Powered Tagging, UX for AI Tools, Content Classification

My role & Team

As the UI/UX Designer on this project, I led end-to-end experience design of the AI-assisted metadata tagging workflow for SPELL-CC at SolarSPELL.

Collaborated with

1 x Product Manager, 1 x Developer, 1 x Communications Specialist

Given our small team and limited resources, we had to make intentional trade-offs—focusing on high-impact features and letting go of certain nice-to-haves to ensure a scalable and user-focused solution.

Background

SPELL-CC (SolarSPELL Content Curation) is an internal tool used by volunteers, interns, and librarians to upload, tag, and organize educational resources for the SolarSPELL digital library.

Each uploaded resource must be tagged with detailed metadata like subject, grade level, language, region, and copyright status to ensure it's easy to find and relevant to the offline communities it's intended for.

Problem Space
Understanding the Tagging Experience in SPELL-CC

Though SPELL-CC already allowed users to upload and tag resources, the process was entirely manual and deeply reliant on each user's interpretation of what “good metadata” looked like.

Manual metadata tagging was slow, inconsistent, and unscalable.

Our challenge was:

How do we embed an AI-driven tagging engine into an existing manual system without breaking trust, over-automating, or losing context?

Discovery

Understanding the Metadata Workflow in SPELL-CC

While reviewing how users interacted with SPELL-CC, we saw that tagging varied by user and their understanding of metadata.

Even with training, there was no guarantee that one person’s interpretation of “Secondary School” or “Public Health” matched another’s.

  • Some users skipped tags they weren’t sure about, leaving fields blank.

  • Others used similar but inconsistent tags like “Storybook” vs. “storybook” or “PDF” vs. “Pdf.”

  • Rights-related fields were often neglected because users didn’t know the license type or hadn’t contacted the rights holder.

  • Users from different regions tagged the same subject in different ways, which made cross-region search unreliable.

This meant the quality of metadata, a crucial part of how SolarSPELL libraries are organized and searched, depended entirely on who was tagging that day.

Defining AI design principles

Introducing AI into the SolarSPELL tagging workflow wasn’t just about speed; it was about trust, transparency, and human oversight. We designed SPELL-Tag’s AI experience to feel like a supportive assistant, not an invisible black box.

Contextual

AI suggestions adapt based on region, audience, and collection.

Human-in-the-Loop

The AI suggests. The human decides. Every tag is editable and never final until reviewed.

Speed and Clarity

Tagging should be fast and focused. The interface stays clean. The output stays clear.

Transparent by Default

Each tag shows where it came from. Confidence scores guide trust. Nothing is hidden.

Design Decisions

Building on insights from discovery and guided by our AI design principles, we crafted an intuitive metadata tagging workflow for SPELL-CC that blends automation, transparency, and human oversight.

Here’s how we arrived at the final experience:

1. Seamless Onboarding & Context-Aware Tagging

Before tagging begins, users are prompted to set a few key preferences like desired Content Analysis Depth, Confidence Threshold, and options such as OCR or Language Detection.

These settings only need to be configured once during initial setup, but they’re fully customizable later.

Decision

These inputs help the AI understand tagging constraints and narrow its prediction scope. A simple, linear setup flow walks users through each configuration step.
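
To make this concrete, here is a minimal Python sketch of how those one-time preferences might be captured and handed to the tagging engine. The field names and default values are hypothetical, for illustration only, not the actual SPELL-CC implementation.

    from dataclasses import dataclass

    @dataclass
    class TaggingPreferences:
        """One-time setup choices that constrain the AI's prediction scope (hypothetical names/defaults)."""
        analysis_depth: str = "standard"     # e.g. "quick", "standard", "deep"
        confidence_threshold: float = 0.75   # suggestions below this are flagged for review
        enable_ocr: bool = True              # extract text from scanned documents
        detect_language: bool = True         # infer the resource's language automatically
        region: str = "Pacific Islands"      # deployment context used to narrow the tag vocabulary

    # Configured once during onboarding, but editable later from settings.
    prefs = TaggingPreferences(analysis_depth="deep", confidence_threshold=0.8)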

2. Handling Critical Metadata Fields with Precision

Some key fields, like rights holders, rights statements, and authors, may not be auto-extracted, as many documents lack this information.

Because these fields are essential for legal and archival accuracy, users are prompted to review or enter them manually.

A “Custom Metadata” option is also available at the end of the process to fill in anything that’s missing.

Decision

This allows users to input missing values or tailor entries to suit certain standards, ensuring both compliance and completeness.
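
A small sketch, under the same illustrative assumptions as above, of how the tagging logic could keep rights and author fields out of auto-extraction and always route them to manual entry; the field lists and threshold are placeholders, not the production rules.

    # Fields the AI may pre-fill vs. fields that always require human entry (illustrative lists).
    AUTO_TAGGABLE_FIELDS = {"subject", "grade_level", "language", "region", "resource_type"}
    MANUAL_ONLY_FIELDS = {"author", "rights_holder", "rights_statement"}

    def route_field(field, suggestion, confidence, threshold=0.75):
        """Decide whether a field is pre-filled by the AI or handed to the user for manual entry."""
        if field in MANUAL_ONLY_FIELDS or suggestion is None or confidence < threshold:
            # Legal and archival fields, and low-confidence guesses, are never auto-applied.
            return {"field": field, "value": None, "needs_manual_input": True}
        return {"field": field, "value": suggestion, "confidence": confidence, "needs_manual_input": False}

    route_field("rights_holder", "Unknown Publisher", 0.9)   # -> needs_manual_input: True
    route_field("subject", "Public Health", 0.88)            # -> pre-filled AI suggestion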

3. Add or Edit Metadata as Needed

While SPELL-Tag automates much of the tagging process, not all documents come with complete or standard metadata.

To account for this, users have the option to manually add or edit metadata tags before final submission.

This step is limited to metadata fields only and does not apply to fields like author or rights information.

Decision

This gives users flexibility, control, and accuracy, letting them supplement or refine metadata tags, especially when AI suggestions require contextual adjustments.
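
One way this pre-submission edit step could look in data terms, sketched with hypothetical names: user edits overwrite the AI’s suggestion, and each tag keeps a provenance marker so reviewers can still see where a value came from.

    def apply_user_edits(suggested, edits):
        """Overlay manual edits on AI-suggested tags while keeping per-tag provenance."""
        final = {}
        for field, tag in suggested.items():
            if field in edits:
                # A user edit replaces the suggestion and is marked as human-entered.
                final[field] = {"value": edits[field], "source": "user", "confidence": None}
            else:
                final[field] = {"value": tag["value"], "source": "ai", "confidence": tag["confidence"]}
        return final

    tags = apply_user_edits(
        {"subject": {"value": "Public Health", "confidence": 0.82},
         "grade_level": {"value": "Secondary School", "confidence": 0.74}},
        {"subject": "Community Health"},
    )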

Impact

Note: As the tool is still being piloted internally, impact is currently measured through user feedback and usage observation. Quantitative metrics will be added once broader deployment begins across multiple regions.

We launched the AI-powered metadata tagging assistant within SPELL-CC to a small group of internal users across various deployment regions. Even in early use, the tool is already helping:

  • Interns complete tagging faster and with more confidence.

  • Volunteers identify tags they might have otherwise missed.

  • New users feel more supported during onboarding.

Next Steps

Audit aligned with Human-AI Guidelines

After building the first version of the AI tagging assistant, we revisited the experience through the lens of Human-AI Interaction principles, referencing Microsoft’s HAX Toolkit and our internal design values.

This audit helped surface improvements to enhance user trust, usability, and accuracy in hybrid workflows involving both AI and human review.

Upcoming Focus Areas

1. Improve Onboarding & User Guidance

Design a clearer first-time experience with tooltips and prompts to help users understand how AI tagging works, especially which fields (e.g., rights, author) need manual input.

2. Enhance Visual Clarity

Refine the UI to better differentiate between AI-suggested, user-edited, and low-confidence tags, making the review process faster and more intuitive.

3. Support for Edge Cases & Offline Workflows

Introduce smarter fallbacks when AI is uncertain—such as offering common tag suggestions or enabling offline/manual workflows for regions with limited connectivity or ambiguous content.
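
A rough sketch of how such a fallback could be structured, again with hypothetical names, thresholds, and tag lists:

    # Frequently used tags per collection, which could seed suggestions when the model is unsure (illustrative).
    COMMON_TAGS = {
        "Pacific Islands": ["Climate Change", "Public Health", "Storybook"],
    }

    def suggest_with_fallback(prediction, confidence, threshold, collection, online):
        """Return a confident AI suggestion, common-tag fallbacks, or a manual-workflow handoff."""
        if online and prediction and confidence >= threshold:
            return {"mode": "ai", "suggestions": [prediction], "confidence": confidence}
        if online:
            # The model is uncertain: offer the collection's most common tags instead of guessing.
            return {"mode": "fallback", "suggestions": COMMON_TAGS.get(collection, [])}
        # No connectivity: queue the resource for fully manual tagging.
        return {"mode": "manual", "suggestions": []}

Whether this takes the shape of cached common tags or a fully manual queue will depend on each deployment’s connectivity, so the logic above is only a starting point for that exploration.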
