

TIMELINE
March 2025 - November 2025
9 Months
TEAM
4 UX Designers
1 PhD Researcher
1 Faculty Staff
SKILLS
Figma Community Resource, AI Design Research, Concept Design, UX Design, Systems Thinking, Design Strategy
TLDR
OVERVIEW
The ACE Toolkit is a modular ideation and reflection system built as a Figma Community resource. It sits directly inside designers' existing workflows and helps them think through AI behavior, surface value tensions, and turn abstract AI decisions into something they can actually discuss and act on.
ROLE
I designed and built a modular Figma toolkit now published to the Figma Community, used by designers to reason about AI behavior and make more intentional decisions early in the ideation process. Over 9 months I collaborated with 4 design researchers using a research-through-design approach to explore how structured prompts and visual scaffolding could support more intentional AI concepting.
IMPACT
Published toolkit to the Figma Community, driving adoption directly within designers' workflows.
Strengthened early AI decision-making by helping designers articulate AI behavior, values, and implications.
Supported individual reflection and collaborative critique through a modular, reusable system.
Co-authored a research paper on the toolkit, accepted to DIS 2026, a premier ACM venue for interactive design research.
Designers want better ways to envision, reflect on, and critique their own AI concepts early in the process.
Designing for AI often feels ambiguous. Most designers don't have a good framework for reasoning about AI behaviors, value tensions, or what tradeoffs they're actually making. We wanted to make that part of the process easier and less intimidating.
Finding the Pattern
I mapped out how designers approach ideation and storyboarded the moments where they got stuck or had to backtrack. A few patterns kept coming up: difficulty translating loose concepts into concrete AI behaviors, trouble anticipating value tensions, and no lightweight way to reflect before committing to a direction. Those patterns shaped our first prototype.


The Moment Everything Reframed
Designers weren't struggling because they lacked ideas. They struggled because they didn't know what good AI behavior looked like or how to evaluate the implications of their choices. That shifted our focus toward helping them externalize their thinking, ground concepts in real AI capabilities, and evaluate decisions against human values.

First Attempt: A Standalone Web Tool
Our first prototype included an AI capability library, use cases, value reflection cards with sliders, and an evaluation matrix. It covered the right ground, but the format created problems we didn't anticipate.

AI Capability Library
A list of core AI capabilities with definitions and examples to help designers understand what AI can actually do.
Reflection Cards with Sliders
Prompts to surface human-AI value tensions with sliders to indicate where a concept landed on each spectrum.

Use Cases
Examples followed by questions to encourage designers to think ahead about implications.
Evaluation Matrix
A structured way to assess feasibility and alignment with human values.
What Pilot Testing Revealed


Not integrated into designers' workflows.
Pilot testing with 30+ design professionals showed that pulling designers out of Figma, FigJam, and Miro created friction and reduced the likelihood of adoption.
Sliders created false precision and undermined reflection.
They shifted conversations toward debating numbers instead of discussing the value tensions themselves.
Digital format limited collaborative critique.
Educators and design teams needed printable and tangible materials for group critique so they could rearrange, sort, and cluster concepts.
Designing Where Designers Already Are
I rebuilt the toolkit as a Figma Community resource so it lived directly where designers already work. Components became modular and moveable to support both individual reflection and group critique.
AI Capability Library
Refined the layout and added user guides explaining why each capability matters for concept reasoning.
Value Reflection Cards
Kept the prompts, removed the sliders, and made the cards fully moveable. Enhanced the color palette for better accessibility and WCAG contrast compliance.

Reflection Matrix Template
A quadrant canvas for mapping tensions, clustering insights, and documenting reasoning visually.

User Guides
Short guides showing how to browse capabilities, move into values reflection, and use the toolkit individually or as a team.
The ACE Toolkit
The system is built around three layers: a capability library that grounds concepts in real AI behaviors, value reflection cards that surface tensions without forcing resolution, and a matrix template for mapping decisions visually. Each layer is modular and independent, so designers can enter the system at any point depending on where they are in their process.

What This Changed For Me
This project reinforced that designing for AI means designing for uncertainty. Our assumptions about format and structure were challenged through pilot testing, and the strongest decisions came from acknowledging what wasn't working. The toolkit only started working once it fit into how designers actually work, instead of asking them to change their process.






