AI Concept Envisioning (ACE) Toolkit

Designing for designers through a reflective AI concept ideation toolkit

Figma Community Resource

AI Design Research

Concept Design

UX Design

Systems Thinking

Design Strategy

OVERVIEW

Designers are eager to use AI but often lack the mental models to understand what AI should do or how to evaluate its impact early in the process. Our team built a toolkit, available as a Figma Community resource, that helps designers understand core AI capabilities, surface value tensions, and reflect before jumping into solutions.

WHAT I DID

I led the systems design for the toolkit, defined the structure and flow, and facilitated design sessions to understand where designers struggle during AI concept ideation. I created mid to high fidelity prototypes in Figma, built storyboard walkthroughs to visualize early workflows, and presented findings and iterations for faculty review.


Over 9 months, I collaborated with 4 other design researchers, using a research-through-design approach to explore how reflection, visual scaffolds, and structured prompts could support more intentional AI concepting.

THE CHALLENGE

Designing for AI often feels ambiguous and inaccessible. Most designers lack frameworks for reasoning about AI behaviors, data-driven interactions, and the value tensions that emerge when humans and AI collaborate. Existing resources, such as Google’s People + AI Guidebook or Microsoft’s HAX Toolkit, tend to support advanced teams and offer limited guidance at the ideation stage.

Designers need better ways to envision, reflect on, and critique their own AI concepts early in the process.

EARLY EXPLORATION

To understand these gaps, we mapped designers’ ideation processes and storyboarded the moments when they paused, questioned, or reframed their concepts.

These explorations revealed three key points in early AI ideation that shaped our first prototype.

Designers had difficulty translating loose concepts into concrete AI behaviors, found it hard to anticipate value tensions, and lacked lightweight cues that invited reflection before committing to a direction.

OPPORTUNITY

From early mapping and reflection exercises, we saw that designers didn’t struggle to generate AI ideas. They struggled to understand what “good” AI behavior looks like and how to reflect on the implications of their choices.

We saw an opportunity to create a toolkit that helps designers externalize their thinking, ground ideas in real AI capabilities, and reflect on how those decisions align with human values.

From this opportunity, I ideated various ways the toolkit could be integrated into designers' workflows.

VERSION 1

The Web-based Toolkit

Guided by our mapping and storyboards, our first prototype included an AI capability library, use cases, value reflection cards, and an evaluation matrix with sliders.

AI Capability Library

A selection list of the core capabilities of AI, adapted from CMU’s AI Brainstorming Kit.

Reflection Cards with Sliders

Prompts to surface human-AI value tensions.

Use Cases

Example use cases followed by questions that encourage anticipatory thinking.

Evaluation Matrix

A structured way to assess feasibility and alignment with human values.

WHY IT DIDN'T WORK

Pilot Testing with 30 Design Professionals Surfaced Adoption Barriers

Not integrated into designers' workflows.

Designers ideate in Figma, FigJam, and Miro. Leaving those spaces created friction and reduced the likelihood of adoption.

The slider model contradicted reflection.

Sliders produced false precision and shifted focus to debating numbers instead of discussing value tensions.

Digital format limited teamwork.

Educators and design teams needed printable and tangible materials for group critique so they could rearrange, sort, and cluster concepts.

VERSION 2

Moving the Toolkit into the Figma Community

The findings guided our next steps: we needed to design the toolkit where designers already work. We shifted the entire toolkit into a Figma Community resource, redesigning components to be modular, tactile, and remixable.

AI Capability Library

Refined the layout of each definition and its use cases, and added user guides explaining why each capability matters for concept reasoning.

Value Reflection Cards

Kept the prompts, removed the sliders, and made the cards fully movable. We also refined our primary color palette for improved accessibility and contrast.

Reflection Matrix Template

A quadrant canvas for mapping tensions, clustering insights, and documenting reasoning visually.

User Guides

Short guides were added to show how to browse capabilities, transition into values, and use the toolkit individually or as a team.

THE TOOLKIT

The redesigned toolkit supports designers where reflection naturally happens during the ideation phase. It helps them articulate AI behavior early, identify value tensions before concepts solidify, and communicate decisions through shared artifacts. What started as a standalone website evolved into a collaborative resource shaped by the realities of design practice.

IMPACT

Shipped the redesigned toolkit to the Figma Community, making it accessible and usable for designers in their workflow.

Improved adoptability by building a Figma-native toolkit that integrates directly into designers’ existing workflows.

Created a modular and tactile system that supports both individual reflection and group critique.

Allowed for more intentional AI concepting with prompts that help designers articulate AI behavior and anticipate implications earlier.

REFLECTION

This project reinforced that designing for AI also means designing for uncertainty. Our assumptions about format and structure were refined by pilot testing, and the strongest decisions came from embracing what was not working. The toolkit only became useful once it aligned with the workflows, tools, and constraints designers already rely on.


I also learned how much reflective scaffolding designers need when reasoning about AI behaviors. The moments of hesitation, confusion, and debate we saw during testing directly shaped the prompts, structures, and modularity of the final system, which ultimately mirrored real workflow habits, supported collaboration, and lowered the cognitive barrier to evaluating value tensions and implications.