A woman in Denver is looking at your pricing page. She's 34. Marketing manager. Makes $95,000. She just got promoted. Her last three software purchases worked out. She sees $49/month and thinks: reasonable for a professional tool. She clicks "Start trial."
Another woman in Denver is looking at the same pricing page. She's also 34. Also a marketing manager. Also makes $95,000. But she was laid off eight weeks ago. She just started a new job at lower pay. Her savings took a hit. She sees $49/month and her stomach tightens. Another recurring charge. She closes the tab.
Same demographics. Same page. Opposite outcomes. And every design feedback tool on the market — every AI review, every heuristic evaluation, every "pretend you're a user" prompt — would give you identical feedback for both of them. Because none of them know the difference between these two people.
That difference is the only thing that matters.
Same demographics. Same page. Different life.
The pricing page didn't change — her circumstances did.
Lisa Torres
34 · Marketing Manager · Denver, CO
Life context: Just got promoted. Third raise in two years. Last three software purchases worked out well. Emergency fund is healthy.
Psychological state (reaction to $49/mo pricing page): “Reasonable for a professional tool. Scanning features, not price. Already thinking about how to pitch this to her team.”
Lisa Torres
34 · Marketing Manager · Denver, CO
Life context: Laid off 8 weeks ago. New job at lower pay. Savings took a hit. Last subscription auto-renewed at the worst possible time.
Psychological state (reaction to $49/mo pricing page): “Another recurring charge. Can I cancel easily? What happens after the trial? Calculating monthly costs before looking at a single feature.”
Generic AI feedback would give identical advice for both.
Context is the only variable that matters.
— The Blind Spot
Demographics describe who someone is. Context determines what they do.
Ask any AI to review a checkout page and you'll get some version of: "Consider adding social proof. The CTA could be more prominent. The pricing tiers may confuse some users." It says this about every checkout page. It said it about yours. It said it about your competitor's. The feedback is technically true and practically useless — because it has no idea who is looking at the screen.
The insight hiding inside fifty years of behavioral economics is disarmingly simple: people don't make decisions based on who they are. They make decisions based on where they are in their life right now. A recent layoff changes your relationship with money more than your income bracket ever did. A new baby changes your risk tolerance more than your age. A bad experience with a subscription service last month changes how you read every pricing page this month.
This isn't a theory. It's the reason A/B tests produce different results in January (post-holiday financial anxiety) than in March (tax refund confidence). It's why the same landing page converts differently on a Monday morning than a Friday afternoon. The page didn't change. The people looking at it are in a different psychological place.
Demographics describe who someone is. Context determines what they do.
— The Lineage
Economists solved this problem decades ago. Nobody applied it to design.
In the 1970s, Thomas Schelling demonstrated something that upended economics: you can't predict how a population will behave by modeling the average person. You have to model individual agents — each with their own preferences, constraints, and circumstances — and let them act independently. The aggregate behavior that emerges is often completely different from what the "average" would predict.
This became agent-based modeling, and it's been standard practice in economics, epidemiology, and social science ever since. Markets, disease outbreaks, traffic patterns, urban development — all modeled by simulating diverse individuals, not by averaging them.
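To see why averaging fails, consider Granovetter's classic threshold model, a close cousin of Schelling's work: each agent adopts a behavior once enough others already have. The toy sketch below (illustrative, not taken from either economist's papers) shows two populations with nearly identical average thresholds producing opposite aggregate outcomes:

```python
def cascade_size(thresholds):
    """Granovetter-style cascade: an agent adopts once the number of
    existing adopters meets or exceeds their personal threshold."""
    adopters = 0
    while True:
        new = sum(1 for t in thresholds if t <= adopters)
        if new == adopters:
            return adopters
        adopters = new

uniform = list(range(100))                  # thresholds 0, 1, 2, ..., 99
tweaked = [0, 2] + list(range(2, 100))      # one agent's threshold moved from 1 to 2

print(cascade_size(uniform))   # 100: every agent eventually adopts
print(cascade_size(tweaked))   # 1: the cascade stalls after the first adopter
```

The two populations differ in average threshold by one hundredth of a point, yet one produces a full cascade and the other produces almost nothing. No "average person" model can express that.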
Nobody had applied this to design feedback. The entire industry was stuck on two approaches: ask one AI for one opinion, or ask a small group of real humans (who are expensive, slow, and systematically different from your actual users). Nobody was building a diverse synthetic population and letting the variance tell the story.
That's what we built.
— How It Works
We don't simulate opinions. We simulate the lives that produce them.
Every synthetic user in our panel is a complete person — not a persona slide, not a demographic label, not an LLM playing dress-up. A person with a specific job they feel a specific way about, a financial situation that's either comfortable or strained, a history of products that earned their trust and products that burned it.
The trick isn't the demographics. It's the life trajectory. Every synthetic user has a biographical timeline — career changes, relationships, financial events, health milestones — generated from the same longitudinal research datasets that economists use to model population behavior. These events aren't random. They follow the same statistical patterns that real lives follow: correlations, preconditions, clustering.
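One way to picture such a trajectory generator is a chain where each event's odds depend on what came before. The sketch below is a minimal illustration; the event names and transition weights are invented for this example and are not drawn from any real longitudinal dataset:

```python
import random

# Hypothetical transition model: each event's probabilities are
# conditioned on the previous event (preconditions and clustering).
EVENTS = {
    "stable":            {"stable": 0.80, "promotion": 0.10, "layoff": 0.10},
    "promotion":         {"stable": 0.85, "promotion": 0.05, "layoff": 0.10},
    "layoff":            {"new_job_lower_pay": 0.70, "stable": 0.30},
    "new_job_lower_pay": {"stable": 0.90, "layoff": 0.10},
}

def generate_timeline(years, seed=0):
    """Sample one career trajectory, year by year."""
    rng = random.Random(seed)
    state, timeline = "stable", []
    for year in range(years):
        options = list(EVENTS[state])
        weights = list(EVENTS[state].values())
        state = rng.choices(options, weights=weights)[0]
        timeline.append((year, state))
    return timeline
```

A real generator would condition on far more (age, industry, prior events, finances), but the structural idea is the same: events follow events, so layoffs cluster with job changes the way they do in real lives, rather than being sprinkled in at random.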
And here's the part that changes everything: those life events don't just sit in a database. They compute into a real-time psychological state. A recent layoff doesn't just flag someone as "unemployed" — it shifts their confidence, their willingness to take financial risk, their patience with complicated flows, their sensitivity to price. And the magnitude of that shift depends on their personality. Resilient people recover faster. Anxious people carry it longer.
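Here is one way that computation could be sketched. Everything below (the state variables, the impact weights, and the decay formula) is an assumption made for illustration, not the product's actual model:

```python
from dataclasses import dataclass

@dataclass
class Personality:
    resilience: float  # 0..1, higher means faster recovery from shocks

# Hypothetical impact weights: (confidence shift, price-sensitivity shift)
EVENT_IMPACT = {
    "layoff":    (-0.4, +0.5),
    "promotion": (+0.3, -0.2),
}

def psychological_state(events, personality):
    """Fold dated life events into a current state. Each event's impact
    decays exponentially with time; resilience shortens the half-life."""
    confidence, price_sensitivity = 0.5, 0.5   # neutral baseline
    for weeks_ago, name in events:
        d_conf, d_price = EVENT_IMPACT[name]
        # half-life shrinks as resilience grows (~26 weeks down to ~9)
        half_life = 26 * (1 - 0.65 * personality.resilience)
        decay = 0.5 ** (weeks_ago / half_life)
        confidence += d_conf * decay
        price_sensitivity += d_price * decay
    return confidence, price_sensitivity
```

Run this for the same layoff eight weeks ago and a resilient profile ends up measurably more confident and less price-sensitive than an anxious one: the same event, carried differently by different people, exactly as described above.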
So when two synthetic users look at your pricing page and have opposite reactions, you know exactly why. Not "some users might find it confusing" — but "this specific person, with this specific life context, would hesitate here for this specific reason." That's the difference between feedback you file away and feedback you act on.
— The Moment It Clicks
When you see the full spectrum, you stop designing for the average.
A fintech startup showed us their landing page. A generic AI review said the usual: strong value proposition, consider adding trust badges, CTA should be above the fold.
Our synthetic panel told a completely different story. The financially confident users — the ones with stable jobs and recent positive financial events — barely looked at the trust signals. They were scanning for features, looking for competitive differentiation, impatient with the marketing copy. For them, the page had too much reassurance and not enough substance.
The financially stressed users — recent job changes, tighter budgets, bad experiences with hidden fees — couldn't get past the pricing section. The annual billing discount, which the team designed to increase conversion, was actually triggering anxiety in this group. A year-long commitment feels reckless when you don't know what next month looks like. They needed monthly billing with zero commitment language.
Both groups were right. The generic AI review was useless. The team redesigned the page with two paths — and conversion went up across both segments. Not because they added social proof or moved the CTA. Because they finally understood that two different people needed two different things from the same page.
— The Honest Version
What this is and what it isn't.
This isn't user research. It won't replace talking to real humans, watching them struggle, or discovering needs you haven't imagined. What it does is something no usability study can: stress-test a design against the full diversity of life circumstances it will face in the real world. In minutes.
Think of it like a wind tunnel. Not real weather — but specific enough to show you where the drag is before you fly. The teams getting the most value use it as the first pass: surface the top concerns across a diverse population, then design their real research around those specific questions. Not less research. Sharper research.