Supporting Expert Decision-Making Under Uncertainty
Framing note
This case study describes work whose main outcome was not a finished feature, but a clearer strategy, safer decisions, and better alignment across teams in a complex AI setting. The impact lay in how we made decisions, not only in what we built.
Context
Legal decision-making is rarely linear. It involves judgment, exceptions, and accountability, often under time pressure and with real consequences.
In this project, the product team faced the same kind of uncertainty.
We explored how AI could support expert legal review in high-risk workflows, while the problem itself kept shifting: technical limits were still being discovered, cost and performance questions were unsettled, and leadership expectations evolved as we worked.
There was no set plan, no clear definition of "done," and no promise that our first idea for the feature would survive development.
As Principal Product Designer, I was responsible for helping the team handle this uncertainty without rushing into quick fixes.
The Problem We Were Really Solving
On the surface, the work appeared to be about improving AI-assisted review.
In practice, the more critical problem was strategic:
- How do we introduce AI into expert workflows without undermining trust?
- When does assistance become risk rather than value?
- How do we define “correctness” in a domain shaped by nuance and exception?
- How do we give leadership confidence to move forward responsibly?
Without clear direction, we risked focusing on visible progress instead of making choices we could stand behind.
Speed was not the main challenge.
The real challenge was making the important trade-offs clear to everyone.
My Role
As Principal Product Designer, I shaped both the design direction and the way we made decisions about it.
My role included:
- Setting the design vision for how AI should support expert judgment
- Facilitating alignment between product, engineering, and legal stakeholders
- Translating abstract risk into concrete product and design decisions
- Guiding leadership conversations about when to proceed, pause, or reframe
As the only designer on the project, I was accountable for making sure our choices balanced user risk and business needs.
Much of the work happened before we touched the UI: framing the problem, setting decision boundaries, and deciding when AI assistance made sense at all.
What Made This Work Hard
This work was hard because the team did not share a single way of thinking about the problem.
Different disciplines were optimizing for different outcomes:
- Product balanced momentum and delivery expectations.
- Engineering needed clarity around feasibility, cost, and performance.
- Legal experts prioritized defensibility, edge cases, and accountability.
- Design often found itself in the middle of these different viewpoints.
Progress in one area sometimes created problems in another, and parallel workstreams with shared responsibilities made ownership even less clear.
This was not a failure of collaboration; it was the nature of working in a space with no clear answers.
My job was not to remove that friction, but to help turn it into something useful.
Tensions We Had to Resolve
Instead of pushing everyone to agree on one answer, I helped the team talk about the main tensions that shaped our choices.
These tensions became a shared way to look at ideas as our direction changed.
Speed vs. Defensibility
Moving quickly mattered, but only if outcomes could withstand real legal scrutiny.
AI Confidence vs. Legal Uncertainty
AI systems tend to sound confident. Legal work often requires acknowledging what is unknown.
Automation vs. Accountability
Assistance should reduce effort, not shift responsibility away from experts.
Centralized “Correctness” vs. Contextual Judgment
Legal interpretation depends on context. A single global answer is often misleading.
By naming these tensions, we moved our conversations from arguing about features to talking about bigger strategic choices.
Principles That Shaped Direction
From these discussions, I helped the team agree on a set of principles to guide our decisions as things changed.
* Separate AI output from human judgment
AI could inform decisions, but authority remained with experts.
* Make uncertainty visible
Gaps, exceptions, and incomplete coverage needed to be explicit.
* Gate automation behind evidence
Suggestions should appear only once sufficient human-reviewed context exists (see the sketch below).
* Design for intervention, not autopilot
The system should invite expert involvement at the moments that mattered most.
These principles shaped not just this project, but also how other teams looked at AI work.
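To make the "gate automation behind evidence" and "design for intervention" principles concrete, here is a minimal sketch of how such a gate could be expressed in code. Every name, field, and threshold below is a hypothetical illustration of the decision logic, not the product's actual implementation.

```typescript
// Hypothetical sketch of an evidence gate for AI suggestions.
// All types, fields, and thresholds are illustrative assumptions.

interface ReviewContext {
  modelConfidence: number;         // model-reported confidence, 0..1
  humanReviewedPrecedents: number; // expert-reviewed similar cases available
  hasKnownExceptions: boolean;     // jurisdiction- or matter-specific exceptions flagged
}

type GateDecision =
  | { kind: "show"; caveats: string[] }       // shown, with uncertainty made visible
  | { kind: "expert-review"; reason: string } // routed to an expert instead
  | { kind: "suppress"; reason: string };     // not enough evidence to show anything

// Illustrative thresholds; in practice these would be negotiated with legal experts.
const MIN_REVIEWED_PRECEDENTS = 3;
const MIN_CONFIDENCE = 0.85;

function gateSuggestion(ctx: ReviewContext): GateDecision {
  // Gate automation behind evidence: no suggestion without human-reviewed context.
  if (ctx.humanReviewedPrecedents < MIN_REVIEWED_PRECEDENTS) {
    return { kind: "suppress", reason: "insufficient human-reviewed context" };
  }
  // Design for intervention: known exceptions or low confidence route to an expert.
  if (ctx.hasKnownExceptions || ctx.modelConfidence < MIN_CONFIDENCE) {
    return { kind: "expert-review", reason: "known exceptions or low model confidence" };
  }
  // Make uncertainty visible: even a shown suggestion carries its caveats,
  // and final authority stays with the expert.
  return {
    kind: "show",
    caveats: [`model confidence ${ctx.modelConfidence.toFixed(2)}; expert confirmation required`],
  };
}
```

The ordering is the point of the sketch: evidence checks run before confidence checks, and showing a suggestion is the outcome that has to be earned, not the default.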
Key Contributions
My main contribution was helping the team make better decisions, not just creating deliverables.
I:
- Defined review models that separated AI suggestions from final determinations
- Established thresholds that signaled when expert intervention was required
- Helped leadership identify where automation increased risk instead of reducing effort
- Reframed AI assistance as decision support, not decision authority
In practice, this meant reframing our discussions and evaluation criteria around clear decision points: when to show AI suggestions, what evidence was needed, and who made the final decision, even as feature direction changed.
What Changed Because of This Work
Over time, the way the team talked about the work changed.
We moved from "How do we surface answers faster?" to "When is it safe to surface anything at all?"
And from "Can the AI handle this?" to "What does the user need to decide responsibly?"
Before this, we often jumped straight into discussing how to build things. Over time, the team learned to pause and ask whether a suggestion should exist at all and, if so, under what conditions.
As a result:
- Product decisions focused more on managing risk and building trust.
- Engineering received clearer guidance on when AI assistance should appear.
- Leadership felt more confident pausing or redirecting work that carried too much risk.
- The principles developed here continued to inform related AI efforts.
In the end, the team chose to lead with a prototype, but our work ensured that decision rested on a clear understanding of the trade-offs, the risks, and the implications for expert trust.
The result was not a single finished feature, but a stronger, more resilient product direction.
Reflection
This project reminded me of an important lesson about senior design work.
Not all impact is visible in the final user interface.
Sometimes, the most valuable thing you can do is help teams slow down, ask better questions, and avoid choices that are hard to undo.
By accepting uncertainty and making trade-offs clear, I helped the organization handle AI-assisted decisions with more care and confidence.
Several of the principles we set here became reference points in later AI discussions, even as this project changed or moved in new directions.
That clarity still matters as the product keeps evolving.