Designing an AI-Powered Metadata Tagging Tool for SolarSPELL Content Curation
AI-assisted tool to support efficient and accurate metadata tagging across SolarSPELL’s digital libraries.
Jul'25 - Present
SolarSPELL
My role & Team
As the UI/UX Designer on this project, I led end-to-end experience design of the AI-assisted metadata tagging workflow for SPELL-CC at SolarSPELL.
Collaborated with
1 x Product Manager, 1 x Developer, 1 x Communications Specialist
Given our small team and limited resources, we made intentional trade-offs: focusing on high-impact features and letting go of certain nice-to-haves to keep the solution scalable and user-focused.
Background
SPELL-CC (SolarSPELL Content Curation) is an internal tool used by volunteers, interns, and librarians to upload, tag, and organize educational resources for the SolarSPELL digital library.
Each uploaded resource must be tagged with detailed metadata like subject, grade level, language, region, and copyright status to ensure it's easy to find and relevant to the offline communities it's intended for.
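To make the metadata requirement concrete, here is a minimal TypeScript sketch of what a per-resource record could look like. The field names and allowed values are illustrative assumptions, not SPELL-CC’s actual schema.

    // Sketch of a per-resource metadata record, assuming the fields named above.
    // Field names and allowed values are illustrative, not SPELL-CC's real schema.
    interface ResourceMetadata {
      title: string;
      subject: string[];        // e.g. ["Public Health"]
      gradeLevel: string;       // e.g. "Secondary School"
      language: string;         // e.g. "en", "sw"
      region: string;           // deployment region the resource is intended for
      copyrightStatus: "public-domain" | "creative-commons" | "permission-granted" | "unknown";
      fileType: string;         // e.g. "PDF", "Storybook"
    }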
Problem Space
Understanding the Tagging Experience in SPELL-CC
Though SPELL-CC already allowed users to upload and tag resources, the process was entirely manual and relied heavily on each user’s interpretation of what “good metadata” looked like.
Manual metadata tagging was slow, inconsistent, and unscalable.
Discovery
Understanding the Metadata Workflow in SPELL-CC
While reviewing how users interacted with SPELL-CC, we saw that tagging quality varied from user to user, depending on each person’s understanding of metadata.
Even with training, there was no guarantee that one person’s interpretation of “Secondary School” or “Public Health” matched another’s.
Some users skipped tags they weren’t sure about, leaving fields blank.
Others used similar but inconsistent tags like “Storybook” vs. “storybook” or “PDF” vs. “Pdf” (see the normalization sketch after this list).
Rights-related fields were often neglected because users didn’t know the license type or hadn’t contacted the rights holder.
Users from different regions tagged the same subject in different ways, which made cross-region search unreliable.
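The casing and spelling variants above show why free-text tagging drifts: nothing maps each entry back to a single canonical form. A minimal TypeScript sketch of that idea, assuming a hypothetical canonical tag list and a simple case-insensitive match:

    // Minimal sketch of tag normalization against a canonical list.
    // The canonical tags and the matching rule are assumptions for illustration.
    const CANONICAL_TAGS = ["Storybook", "PDF", "Public Health", "Secondary School"];

    function normalizeTag(raw: string): string | undefined {
      const cleaned = raw.trim().toLowerCase();
      // Match case-insensitively so "storybook", "Storybook", and "STORYBOOK" collapse into one tag.
      return CANONICAL_TAGS.find((tag) => tag.toLowerCase() === cleaned);
    }

    normalizeTag(" pdf ");      // "PDF"
    normalizeTag("storybook");  // "Storybook"
    normalizeTag("Pamphlet");   // undefined, so the entry is flagged for human review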
Defining AI design principles
Introducing AI into the SolarSPELL tagging workflow wasn’t just about speed; it was about trust, transparency, and human oversight. We designed SPELLTag’s AI experience to feel like a supportive assistant, not an invisible black box.
Contextual
AI suggestions adapt based on region, audience, and collection.
Human-in-the-Loop
The AI suggests. The human decides. Every tag is editable and never final until reviewed.
Speed and Clarity
Tagging should be fast and focused. The interface stays clean. The output stays clear.
Structured Reliability
Suggestions come from SolarSPELL’s existing metadata vocabulary, so tags stay consistent across users, regions, and collections.
Invisible Assistance
The assistant fits into the existing upload flow, helping without adding steps or pulling attention away from the content.
Transparent by Default
Each tag shows where it came from. Confidence scores guide trust. Nothing is hidden.
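To show what “Transparent by Default” and “Human-in-the-Loop” could mean in the data itself, here is a small TypeScript sketch of how a single AI suggestion might be represented. The field names, sources, and statuses are assumptions for illustration, not the shipped data model.

    // Sketch of one AI tag suggestion, keeping its origin, confidence, and review
    // state visible to the reviewer. All names and values are illustrative assumptions.
    interface TagSuggestion {
      field: "subject" | "gradeLevel" | "language" | "region" | "copyrightStatus";
      value: string;
      confidence: number;   // 0 to 1, shown to the user to guide trust
      source: "document-text" | "collection-defaults" | "similar-resources";
      status: "suggested" | "accepted" | "edited" | "rejected";  // never final until reviewed
    }

    const example: TagSuggestion = {
      field: "subject",
      value: "Public Health",
      confidence: 0.82,
      source: "document-text",
      status: "suggested",
    };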
Impact
Note: As the tool is still being piloted internally, impact is currently measured through user feedback and usage observation. Quantitative metrics will be added once broader deployment begins across multiple regions.
We launched the AI-powered metadata tagging assistant within SPELL-CC to a small group of internal users across various deployment regions. Even in early use, the tool is already helping:
Interns complete tagging faster and with more confidence.
Volunteers identify tags they might have otherwise missed.
New users feel more supported during onboarding.
Next Steps
Audit aligned with Human-AI Guidelines
After building the first version of the AI tagging assistant, we revisited the experience through the lens of Human-AI Interaction principles, referencing Microsoft’s HAX Toolkit and our internal design values.
This audit helped surface improvements to enhance user trust, usability, and accuracy in hybrid workflows involving both AI and human review.
Upcoming Focus Areas