Mental Model: Process over Outcome with Mechanisms
Last week, designers celebrated my promotion at Little Spain at Hudson Yards. I bought everybody churros and ice cream.🍦Last year, I pushed back on going up for promotion, choosing to focus on my health instead. It is hard to voice how deeply sad and disappointed I was about that choice. I kept my head down and deflected recognition of my upward mobility. I was proven wrong when teammates pointed out I was long overdue and operating above my level. It's never about money, fame, or clout. I'm grateful to be treated as a human first, without corporate validation, and to still earn people's respect.
This week, I'm sharing a fairytale I presented to 40+ designers about not settling for less and insisting on the highest standards.
Once upon a time,
There lived a designer named Cat. Every day, she would go about her boring, simple, ordinary life, from user research to designing experiences.
One day, she returned from vacation to discover her life turned upside down. Even with a perfect design-for-development handoff, her greatest fear came true: the designs did not match the release.
“Why didn’t the team do what we agreed to? What were the designs from phase 0 to 2?” demanded heated engineers and project managers. Program managers wrote post-mortem perspectives and blasted them across the organization, kindling hot debate. Finally, senior leadership asked, “Is there any governance?”
FFFF, life is no longer easy. 💩 Cat is left grappling with new circumstances, people, opinions, and a new way of life.
How will Cat get life back to normal?
Cat dives deep to understand the problem and discovers the challenges each person faces. In this org, engineering managers are goal owners. Project managers are scaled broadly at Amazon and have little time to deeply triage backlogs. Engineering managers cannot log interface-related issues and focus on fixing engineering problems. Project managers cannot drive timelines for issues that are never identified. Finally, UX wants everyone to see the return on investment (ROI) of improved experiences.
Among these discoveries: UX had already made eight failed attempts to convince senior leadership. The main outcome was, “That looks like a lot of work, not scalable.”
Cat realizes this will be a tough sell, but "cats have 9 lives."🥁🥁🥁 On the 9th try, she will win this time.
Changing the mental model with mechanisms
A mental model is a simplified way to describe how our brains make sense of information to make a decision. When a certain worldview dominates your thinking, you’ll try to explain every problem you face through that worldview.
Mental models can be broken down into simple cause-and-effect statements that depict a process from input to output. This process is called a mechanism. Think of a gear in a machine: the gear can be sized big or small to achieve the desired outcome.
Cause and effect: "The designs suck because the engineers didn't follow specs. We wouldn't be here today if they did what they were told." The mental model places the blame on bad engineers, when the system itself has problems. The rationalizations follow:
- The team knows they did wrong, and it won't happen again.
- My team is better, so that won't happen to me.
- If we were given appropriate deadlines, we could do it right.
Everybody has good intentions, and nobody would argue against bar-raising aspirations. The problem is that good intentions don't work because you're not asking for a change. Good intentions are not scalable, i.e., "I wish this group of people didn't have such low skills." ✨ Nice wishes. How did it go? ✨
Cat set out to create a User Experience Quality Assurance (UXQA) mechanism with three critical steps: clarity, accountability, and governance & inspection.
Clarity
To bring clarity to the issues users face, Cat wrote high-quality tickets capturing user scenarios, acceptance criteria, related documentation, proposed solutions, and severity of impact. Cat directly tied 92% (13/14) of user issues to bypassed designs. The severity of user impact is a new rubric tied to the engineering service level agreement (SLA).
Input: High-quality documentation of user issues and pre-ranked by severity
Output: An aggregated tracker depicting all user issues reviewed weekly.
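As an illustration only (the field names and the SLA day counts below are my own assumptions, not the team's actual template), a high-quality ticket and its severity-to-SLA rubric could be sketched like this:

```python
from dataclasses import dataclass

# Hypothetical severity rubric: Severity 1 is the most urgent.
# The SLA day counts are made-up placeholders, not Amazon's real SLAs.
SEVERITY_SLA_DAYS = {1: 7, 2: 14, 3: 30}

@dataclass
class UXTicket:
    title: str
    user_scenario: str           # what the user was trying to do
    acceptance_criteria: list    # conditions that mark the issue resolved
    related_docs: list           # links to specs / design files
    proposed_solution: str
    severity: int                # 1 (worst) to 3

    def sla_days(self) -> int:
        """Days allowed to resolve, per the severity rubric."""
        return SEVERITY_SLA_DAYS[self.severity]

# Example ticket, pre-ranked by severity before it enters the tracker.
ticket = UXTicket(
    title="Dropdown truncates long labels",
    user_scenario="User selects a region from the filter dropdown",
    acceptance_criteria=["Labels wrap or show a tooltip on truncation"],
    related_docs=["design-spec-link"],
    proposed_solution="Apply the design-system truncation pattern",
    severity=2,
)
```

Capturing severity as data (rather than prose) is what lets the tickets aggregate cleanly into the weekly tracker.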
Accountability
The tickets were aggregated and presented in weekly grooming sessions, sized by effort. This high-quality work built trust with the team, allowing Cat to move issues directly into the engineers' backlog. The product manager welcomed UX's support in triaging the backlog weekly and filling in the ranking. Cat created accountability by negotiating an agreed-upon rate of resolution per sprint, starting at a soft 10%.
Input: Prioritized issues and negotiated rate of resolution
Output: Rate of resolution over time.
Governance and Inspection
Engineers held up their end of the bargain, resolving 10% per sprint over time. Cat aggregated the rate of net-new issues versus open issues monthly. These aggregations proved the growth in new issues came from unresolved UX issues, engineering fixing the wrong issues, and/or bugs. Cat identified the tickets and owners of specific bugs that did not pass the documented acceptance criteria. This led engineering to grant Cat and the PM the authority to review code and block it from release.
Input: Rate of resolution, net new, age of open trend over time
Output: Code blocker power, high-quality production outcome
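As a rough sketch with invented numbers (the story doesn't show the tracker's actual columns), the governance math reduces to two small calculations: the resolution rate per sprint and the net-new issues per period:

```python
def resolution_rate(resolved: int, open_at_start: int) -> float:
    """Fraction of the open backlog resolved in a sprint (e.g. the soft 10% target)."""
    return resolved / open_at_start if open_at_start else 0.0

def net_new(opened: int, resolved: int) -> int:
    """Issues opened minus issues resolved in a period; positive means the backlog grew."""
    return opened - resolved

# Made-up example: 40 issues open, 4 resolved this sprint -> the 10% target is met,
# but 6 new issues arriving means the backlog still grew by 2. This is exactly the
# signal that prompts inspection of where the new issues are coming from.
rate = resolution_rate(resolved=4, open_at_start=40)
growth = net_new(opened=6, resolved=4)
```

Plotting these two numbers over time is what separates "we are hitting our rate" from "the backlog is actually shrinking."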
Continuing the virtuous cycle: issues identified outside the existing scope became new features and were placed back into the first step, clarity.
In a 4-month pilot (Aug-Dec 2022), the team resolved 72% of Severity 1 issues and 84% of Severity 3 issues. This brought 58.3% time savings (from 24 to 10 minutes), averaging 40 seconds per ticket, during grooming and prioritization sessions. Program managers benefited from a centralized tracker, gaining clarity into the accountability status of each issue. Product managers appreciated the additional support to dive deep and batch requests into a roadmap, ensuring we're always delivering the best possible customer experience.
Cat tied this KPI to the Amazon leadership principle of customer obsession. Just as engineers measure their impact in ticket volume, size, and latency, the design team now has key metrics aligned with our leadership principles, fostering accountability and a customer-centric culture.
Senior leadership is intrigued by a leadership-principle score applied across all teams at scale. The score provides metric proof and holds teams accountable for delivering exceptional experiences for all users.
Cat returns to her normal life. The story's ending was foretold by senior leadership: "Cat will win over the engineering team, but it's not scalable. What will you do next?"
"Don't look at the scoreboard; play the next play."
To tell that story, in the near future I will write about the other four mental models that explain what distinguishes great from exceptional.
For now, I'll leave you with a poem by ChatGPT:
Principals, Seniors, pleased as punch,
Watched as the mechanism did its crunch,
And as it whirred, and ticked, and turned,
Cat went the extra mile, undeterred.
With wins gained and traction found,
Two organizations came around,
And now, in February of twenty-three,
The mechanism's all they can see.
Inspecting all parts with care and zeal,
Ensuring velocity and speed to feel,
The journey ahead is long and winding,
But with this mechanism, they'll keep on grinding.
So let us praise the thought and skill,
The craft, the care, that made it thrill,
And with each tick and whir, and turn,
May the mechanism forever churn.
PS: Any feedback, thoughts, or comments? How will you apply mechanism thinking to your life?