
Why Synthetic Users Give Better Design Feedback Than Real Ones

The counterintuitive case for simulated audience reactions.


The first objection is always the same: "But they're not real people."

Fair. Synthetic users are not real people. They don't have actual memories, actual jobs, or actual credit card debt. They're simulated — constructed from demographic data, personality models, life trajectory engines, and behavioral psychology research.

And for design feedback, they're better than real people in ways that matter. Not better at everything. Not a replacement for user research. But better at the specific task of evaluating whether a design will work for a given audience. Here's why.

The Articulation Problem

Real users can't tell you why.

Ask someone why they didn't click the "Start free trial" button and you'll get a post-hoc rationalization: "The page was too busy" or "I wasn't sure what I was signing up for." These aren't wrong, exactly — but they're not the real reason either.

The real reason is often a feeling shaped by context they can't access: the subscription they got burned by last year, the fact that they're between jobs and $29/month feels different than it did six months ago, the subtle pattern recognition that says "enterprise pricing pages that hide the actual cost usually have a catch."

Real users experience these reactions. They can't articulate them. In a usability study, they'll point to surface-level issues because those are easy to verbalize. The deeper trust hesitation, the life-context-shaped price sensitivity, the pattern-matched skepticism — these stay invisible.

Synthetic users don't have this limitation. Because we build the life behind the reaction — the job, the income, the financial stress, the history of products that earned or lost their trust — we can trace the reaction to its source. "This user hesitated because they were recently laid off and any recurring charge triggers financial anxiety" is a more useful insight than "some users found the pricing unclear."

We built the life behind the opinion.

The Scale Problem

You can't test 1,000 perspectives in an afternoon.

A usability study gives you 5-8 participants. That's enough for qualitative insight — discovering problems you didn't know existed. It's not enough for quantitative signal — understanding how broadly a problem affects your target audience.

When 6 out of 8 participants mention a concern, is that 75% of your audience or a quirk of your sample? With 8 participants, you can't know. The confidence interval is enormous.

Synthetic users solve this by running many independent evaluations simultaneously. Not copies of the same perspective — genuinely different evaluators with different backgrounds, different skepticism levels, different life circumstances. When a strong majority flags the same trust issue, that's a signal you can act on. When only a handful do, it's a niche concern — worth noting, not worth redesigning around.

This is the same principle that makes A/B testing powerful: statistical significance from sample size. Synthetic users achieve it in minutes instead of weeks.
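To put a number on "enormous," here is a minimal sketch (plain Python, standard library only) of the Wilson score interval, a standard way to bound a binomial proportion. The sample sizes and rates are illustrative placeholders, not data from any particular study.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 6 of 8 usability participants raise a concern. Is that really ~75% of the audience?
print(wilson_interval(6, 8))       # ~(0.41, 0.93): anywhere from a minority to nearly everyone
# The same 75% rate observed across a 1,000-evaluator panel:
print(wilson_interval(750, 1000))  # ~(0.72, 0.78): a proportion you can act on
```

The point isn't the exact bounds; it's how quickly the interval collapses as the sample grows.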

The Bias Problem

Real feedback has a selection bias nobody talks about.

Who signs up for user research? People with time, people who are comfortable giving feedback, people who are motivated by the incentive (usually $50-100). This is a specific demographic — and it's often not the demographic you're designing for.

Your most important users — the busy, skeptical, price-sensitive ones who make purchasing decisions quickly and don't tolerate friction — are the least likely to show up for a 45-minute usability session. The feedback you collect is from the people willing to give it, which is a systematically different group than the people you need to hear from.

Synthetic users have no selection bias. The panel is constructed to match your target audience exactly. If you're designing for price-sensitive online shoppers, every evaluator is a price-sensitive online shopper — not a mix of whoever responded to the Craigslist ad.

The Honesty Problem

People are polite. Too polite.

There's a well-documented phenomenon in user research: participants want to be helpful. They'll use your product, find the button, complete the task — and tell you it was "pretty easy, actually" even when they struggled. Social desirability bias makes people soften their reactions, especially in face-to-face settings.

This is why experienced researchers watch what people do, not what they say. But with design mockups — which are what most teams need feedback on — there's nothing to watch. You're asking for opinions on a static image. And people will be nicer about it than they would be in the wild.

Synthetic users are calibrated for skepticism, not politeness. We deliberately skew the panel toward critical evaluation because that's what surfaces useful feedback. A synthetic user who says "this pricing feels manipulative" is more useful than a real participant who says "the pricing is fine, I guess" while quietly deciding not to buy.

The Honest Limits

What synthetic users can't do.

Synthetic users aren't magic. They can't replace every type of user research.

They can't discover needs you haven't imagined — that requires open-ended conversation with real humans. They can't validate product-market fit — that requires real purchasing behavior. They can't test complex multi-session workflows — that requires longitudinal observation. And they can't fully replicate cultural nuance — that requires lived experience.

What they can do is evaluate whether a specific design works for a specific audience against a specific metric — faster and at larger scale than any other method. Use them to explore. Use real users to validate. That's the honest workflow, and it's dramatically faster than what most teams are doing today.

Under the Hood

How synthetic panels are constructed.

A synthetic user isn't a chatbot pretending to be a person. It's a layered behavioral model built from four interlocking systems, each contributing a different dimension of realistic human response.

The first layer is demographics — age, income, education, location, household composition. These aren't random assignments. They're drawn from distributions that match real population data, calibrated so that the panel mirrors the demographic composition of your actual target audience. If you're testing a checkout flow aimed at millennial parents in suburban markets, the panel reflects that — not a uniform spread of every possible demographic.
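As a simplified sketch of what distribution-matched sampling can look like: the categories and weights below are hypothetical placeholders for the millennial-parent example, not a production schema.

```python
import random

# Hypothetical target-audience distributions; in practice these would be
# calibrated against real population data for the segment under test.
AGE_BANDS  = {"25-34": 0.55, "35-44": 0.35, "45-54": 0.10}
HOUSEHOLD  = {"couple_with_kids": 0.70, "single_parent": 0.20, "other": 0.10}
INCOME_USD = {"40k-75k": 0.45, "75k-120k": 0.40, "120k+": 0.15}

def sample_demographics(rng: random.Random) -> dict:
    """Draw one evaluator's demographics from the target-audience distributions."""
    pick = lambda dist: rng.choices(list(dist), weights=list(dist.values()))[0]
    return {
        "age_band": pick(AGE_BANDS),
        "household": pick(HOUSEHOLD),
        "income": pick(INCOME_USD),
    }

rng = random.Random(42)  # seeded, so a panel can be reproduced exactly
panel = [sample_demographics(rng) for _ in range(1000)]
```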

The second layer is personality modeling. Each synthetic user receives a personality profile built on established psychological frameworks — traits like openness to new products, price sensitivity, risk tolerance, and skepticism toward marketing claims. These traits aren't decorative. They directly shape how the user reacts to design choices. A user high in skepticism will fixate on trust signals (or the lack of them). A user high in openness will be more forgiving of unconventional layouts.
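Here is a simplified sketch of trait sampling and of how a trait can gate a reaction. The four trait names come from the paragraph above; the means, spread, and threshold are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Personality:
    openness: float           # willingness to try new products, 0..1
    price_sensitivity: float  # how heavily cost weighs on decisions, 0..1
    risk_tolerance: float     # comfort with unfamiliar vendors, 0..1
    skepticism: float         # distrust of marketing claims, 0..1

def sample_personality(rng: random.Random) -> Personality:
    """Draw trait scores around hypothetical segment means; the spread keeps the panel diverse."""
    draw = lambda mean: min(1.0, max(0.0, rng.gauss(mean, 0.15)))
    return Personality(
        openness=draw(0.50),
        price_sensitivity=draw(0.60),
        risk_tolerance=draw(0.40),
        skepticism=draw(0.65),  # skewed critical; see "The Honesty Problem" above
    )

# Traits directly shape reactions, e.g. whether missing trust signals get flagged:
def fixates_on_trust_signals(p: Personality) -> bool:
    return p.skepticism > 0.60
```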

The third layer is life trajectory — the biographical narrative that gives context to reactions. This includes career history, financial situation, recent life events (a job change, a new baby, a recent bad experience with a subscription service), and purchasing patterns. A synthetic user who was recently hit with unexpected fees from a streaming service will react to your pricing page differently than one who hasn't. This is the layer that produces the insights real usability studies miss — because real participants don't volunteer their life context unprompted.
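One way to sketch the trajectory layer: the event types, recency window, and anxiety heuristic below are illustrative assumptions, not the actual trajectory engine.

```python
from dataclasses import dataclass, field

@dataclass
class LifeEvent:
    kind: str        # e.g. "laid_off", "new_baby", "surprise_subscription_fee"
    months_ago: int  # recency matters: fresh events weigh more heavily

@dataclass
class LifeTrajectory:
    occupation: str
    monthly_budget_usd: int
    recent_events: list[LifeEvent] = field(default_factory=list)

    def price_anxiety(self) -> float:
        """Hypothetical heuristic: recent financial shocks amplify price sensitivity."""
        shocks = {"laid_off", "surprise_subscription_fee"}
        fresh = [e for e in self.recent_events if e.kind in shocks and e.months_ago <= 6]
        return min(1.0, 0.2 + 0.4 * len(fresh))

user = LifeTrajectory("teacher", 3800, [LifeEvent("surprise_subscription_fee", months_ago=2)])
print(user.price_anxiety())  # ~0.6: this user reads a pricing page warily
```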

The fourth layer is behavioral calibration. Each synthetic user's responses are benchmarked against patterns from real-world behavioral research — conversion psychology, decision fatigue studies, trust formation research. This calibration ensures that synthetic reactions aren't just plausible stories but behaviorally grounded predictions. When a synthetic user says "I'd abandon this page," that prediction is rooted in documented behavioral patterns, not narrative invention.
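In sketch form, a calibration check can be as simple as comparing the panel's aggregate prediction to a published benchmark and flagging drift for re-tuning. The rates and tolerance below are made-up placeholders.

```python
def calibration_gap(panel_rate: float, benchmark_rate: float) -> float:
    """Signed gap between the panel's predicted rate (e.g. checkout abandonment)
    and a benchmark rate drawn from behavioral research."""
    return panel_rate - benchmark_rate

# Placeholder numbers: the panel predicts 62% checkout abandonment; behavioral
# research on comparable flows is taken here to report ~55%.
gap = calibration_gap(0.62, 0.55)
if abs(gap) > 0.10:
    print(f"Recalibrate: panel is {gap:+.0%} off the benchmark")
else:
    print(f"Within tolerance ({gap:+.0%}): predictions stay behaviorally grounded")
```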

The Decision Matrix

When to use synthetic vs. real users.

The question isn't whether synthetic users are "better" than real users in some absolute sense. It's which tool fits which stage of the design process. Here's the practical framework we recommend.

Use synthetic users when you need fast directional signal on a design decision — which of three checkout flows will create the least friction, whether your pricing page builds trust or erodes it, how a landing page resonates across different audience segments. Synthetic panels excel at breadth: testing many perspectives quickly, comparing options against each other, and identifying the biggest risks before you invest in higher-fidelity validation.

Use real users when you need depth: understanding why a specific behavior happens in lived context, discovering unmet needs through open-ended conversation, validating that real people in real situations will actually pay for what you're building. Longitudinal research, ethnographic studies, and product-market-fit interviews are irreplaceable — no simulation can replicate the richness of watching someone use your product in their actual environment.

Use both when the stakes are high. The most effective workflow we've seen runs a synthetic panel first to identify the top three concerns, then designs a focused user research session around those specific questions. Instead of spending 45 minutes per participant on broad exploration, the researcher spends 45 minutes deep-diving on the issues that matter. Teams that use this layered approach report cutting their research cycles by roughly half — not by doing less research, but by making every research session sharper.

The wrong approach is treating this as an either/or decision. Synthetic users don't replace real users any more than a code review replaces integration testing. They serve different functions at different stages. The teams getting the best results use both — synthetic for exploration and coverage, real for validation and depth.

Topics: synthetic users, user testing, behavioral science, design feedback
