Designing an AI-Powered Metadata Tagging Tool for SolarSPELL Content Curation

AI-assisted tool to support efficient and accurate metadata tagging across SolarSPELL’s digital libraries.

Jul'25 - Present

SolarSPELL

AI-Powered Tagging, UX for AI Tools, Content Classification

My Role & Team

As the UI/UX Designer on this project, I led end-to-end experience design of the AI-assisted metadata tagging workflow for SPELL-CC at SolarSPELL.

Collaborated with

1 x Product Manager, 1 x Developer, 1 x Communications Specialist

Given our small team and limited resources, we had to make intentional trade-offs, focusing on high-impact features and letting go of certain nice-to-haves to ensure a scalable, user-focused solution.

Background

SPELL-CC (SolarSPELL Content Curation) is an internal tool used by volunteers, interns, and librarians to upload, tag, and organize educational resources for the SolarSPELL digital library.

Each uploaded resource must be tagged with detailed metadata like subject, grade level, language, region, and copyright status to ensure it's easy to find and relevant to the offline communities it's intended for.
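For illustration, the record each resource carries might look something like the sketch below; the field names are assumptions based on the metadata described above, not SPELL-CC's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of a SPELL-CC resource record; field names are
# assumptions drawn from the prose above, not the production schema.
@dataclass
class ResourceMetadata:
    title: str
    subject: Optional[str] = None            # e.g. "Public Health"
    grade_level: Optional[str] = None        # e.g. "Secondary School"
    language: Optional[str] = None           # e.g. "en", "sw"
    region: Optional[str] = None             # deployment region the resource targets
    copyright_status: Optional[str] = None   # license / rights statement
    tags: list[str] = field(default_factory=list)
```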


Problem Space
Understanding the Tagging Experience in SPELL-CC

Though SPELL-CC already allowed users to upload and tag resources, the process was entirely manual and deeply reliant on each user's interpretation of what “good metadata” looked like.

Manual metadata tagging was slow, inconsistent, and unscalable.

Our challenge was:

How do we embed an AI-driven tagging engine into an existing manual system without breaking trust, over-automating, or losing context?

Discovery

Understanding the Metadata Workflow in SPELL-CC

While reviewing how users interacted with SPELL-CC, we saw that tagging practices varied from user to user, shaped by each person's understanding of metadata.

Even with training, there was no guarantee that one person’s interpretation of “Secondary School” or “Public Health” matched another’s.

  • Some users skipped tags they weren’t sure about, leaving fields blank.

  • Others used similar but inconsistent tags like “Storybook” vs. “storybook” or “PDF” vs. “Pdf.”

  • Rights-related fields were often neglected because users didn’t know the license type or hadn’t contacted the rights holder.

  • Users from different regions tagged the same subject in different ways, which made cross-region search unreliable.

This meant the quality of metadata, a crucial part of how SolarSPELL libraries are organized and searched, depended entirely on who was tagging that day.
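To make the inconsistency concrete: collapsing variants like “Storybook” vs. “storybook” is essentially a canonicalization problem. A minimal sketch, assuming a hypothetical controlled vocabulary:

```python
# Hypothetical tag normalizer showing how case/spelling variants
# ("Storybook" vs. "storybook", "PDF" vs. "Pdf") could collapse into one
# canonical form. The vocabulary below is an assumption, not SPELL-CC's.
CANONICAL_TAGS = {"storybook": "Storybook", "pdf": "PDF"}

def normalize_tag(raw: str) -> str:
    key = raw.strip().lower()
    return CANONICAL_TAGS.get(key, raw.strip())

assert normalize_tag("pdf") == "PDF"
assert normalize_tag("Storybook ") == "Storybook"
```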

Defining AI Design Principles

Introducing AI into the SolarSPELL tagging workflow wasn’t just about speed; it was about trust, transparency, and human oversight. We designed SPELLTag’s AI experience to feel like a supportive assistant, not an invisible black box.

Contextual

AI suggestions adapt based on region, audience, and collection.

Human-in-the-Loop

The AI suggests. The human decides. Every tag is editable and never final until reviewed.

Speed and Clarity

Tagging should be fast and focused. The interface stays clean. The output stays clear.

Structured Reliability

Suggestions draw from a controlled vocabulary, so the same content yields consistent, predictable tags.

Invisible Assistance

The AI works quietly in the background, surfacing suggestions within the existing workflow instead of interrupting it.

Transparent by Default

Each tag shows where it came from. Confidence scores guide trust. Nothing is hidden.
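Taken together, these principles imply that every suggestion should carry its provenance and confidence, and stay editable until a human accepts it. A minimal data sketch, with illustrative names rather than the production model:

```python
from dataclasses import dataclass

# Sketch of "transparent by default" in data terms: every suggestion
# carries its source and confidence, and nothing is final until a human
# reviews it. All names here are illustrative assumptions.
@dataclass
class TagSuggestion:
    field_name: str         # e.g. "subject"
    value: str              # e.g. "Public Health"
    source: str             # "ai" or "user"
    confidence: float       # 0.0 - 1.0, shown to the reviewer
    reviewed: bool = False  # human-in-the-loop: editable until accepted

def accept(suggestion: TagSuggestion) -> TagSuggestion:
    suggestion.reviewed = True
    return suggestion
```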

Design Decisions

Solution

Building on insights from discovery and guided by our AI design principles, we crafted an intuitive metadata tagging workflow for SPELL-CC that blends automation, transparency, and human oversight.

Here’s how we arrived at the final experience:

1. Seamless Onboarding & Context-Aware Tagging

Before tagging begins, users are prompted to set a few key preferences like desired Content Analysis Depth, Confidence Threshold, and options such as OCR or Language Detection.

These settings only need to be configured once during initial setup, but they’re fully customizable later.

Decision

These inputs help the AI understand tagging constraints and narrow its prediction scope. A simple, linear setup flow walks users through each configuration step.
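A rough sketch of what that one-time configuration could look like in code; the option names mirror the preferences above, but the types and defaults are assumptions:

```python
from dataclasses import dataclass

# Illustrative one-time setup configuration. Option names follow the
# preferences described in the prose; values and defaults are assumed.
@dataclass
class TaggingConfig:
    analysis_depth: str = "standard"   # e.g. "quick" | "standard" | "deep"
    confidence_threshold: float = 0.7  # suggestions below this get flagged
    enable_ocr: bool = True            # extract text from scanned documents
    detect_language: bool = True       # auto-detect the resource language

# Configured once at setup, adjustable later.
config = TaggingConfig(analysis_depth="deep", confidence_threshold=0.8)
```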

2. Handling Critical Metadata Fields with Precision

Some key fields, like rights holders, rights statements, and authors, may not be auto-extracted, since many documents lack this information.

Because these fields are essential for legal and archival accuracy, users are prompted to review or enter them manually.

A “Custom Metadata” option is also available at the end of the process to fill in anything that’s missing.

Decision

This allows users to input missing values or tailor entries to specific standards, ensuring both compliance and completeness.
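One way to picture this safeguard: a pre-submission check that flags legally critical fields still missing a human entry. The field list comes from the prose above; the logic itself is an assumed sketch:

```python
# Sketch of enforcing human entry for legally critical fields before
# submission. The field names come from the case study; the check is assumed.
MANUAL_FIELDS = {"rights_holder", "rights_statement", "author"}

def missing_manual_fields(record: dict) -> set[str]:
    """Return the critical fields the user still needs to fill in."""
    return {f for f in MANUAL_FIELDS if not record.get(f)}

record = {"title": "Hygiene Basics", "author": "J. Doe"}
print(missing_manual_fields(record))  # {'rights_holder', 'rights_statement'}
```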

3. Add or Edit Metadata as Needed

While SPELL-Tag automates much of the tagging process, not all documents come with complete or standardized metadata.

To account for this, users have the option to manually add or edit metadata tags before final submission.

This step is limited to metadata fields only and does not apply to fields like author or rights information.

Decision

This gives users flexibility, control, and accuracy by letting them supplement or refine metadata tags, especially when AI suggestions require contextual adjustments.


Impact

Note: As the tool is still being piloted internally, impact is currently measured through user feedback and usage observation. Quantitative metrics will be added once broader deployment begins across multiple regions.

We launched the AI-powered metadata tagging assistant within SPELL-CC to a small group of internal users across various deployment regions. Even in early use, the tool is already helping:

  • Interns complete tagging faster and with more confidence.

  • Volunteers identify tags they might have otherwise missed.

  • New users feel more supported during onboarding.

Next Steps

Audit aligned with Human-AI Guidelines

After building the first version of the AI tagging assistant, we revisited the experience through the lens of Human-AI Interaction principles, referencing Microsoft’s HAX Toolkit and our internal design values.

This audit helped surface improvements to enhance user trust, usability, and accuracy in hybrid workflows involving both AI and human review.

Upcoming Focus Areas

1. Improve Onboarding & User Guidance

Design a clearer first-time experience with tooltips and prompts to help users understand how AI tagging works, especially which fields (e.g., rights, author) need manual input.

2. Enhance Visual Clarity

Refine the UI to better differentiate between AI-suggested, user-edited, and low-confidence tags, making the review process faster and more intuitive.

3. Support for Edge Cases & Offline Workflows

Introduce smarter fallbacks when AI is uncertain, such as offering common tag suggestions or enabling offline/manual workflows for regions with limited connectivity or ambiguous content.
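A minimal sketch of that fallback behavior, assuming a per-user confidence threshold and a hypothetical pool of common tags for the collection:

```python
# Sketch of the fallback idea: when model confidence falls below the
# user's threshold, surface common tags for manual selection instead of
# guessing. Threshold and fallback source are illustrative assumptions.
def suggest_or_fallback(prediction: str, confidence: float,
                        threshold: float, common_tags: list[str]) -> list[str]:
    if confidence >= threshold:
        return [prediction]
    # Low confidence: offer frequently used tags for the user to pick from.
    return common_tags

print(suggest_or_fallback("Public Health", 0.42, 0.7,
                          ["Health", "Public Health", "Hygiene"]))
```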
