Show HN: Uxia: AI-powered user testing in minutes

3 points · borja_d · 4 comments · 9/7/2025, 7:31:10 AM · uxia.app
Hi HN,

As PMs (at Google, Gopuff, and Shiji), my cofounder and I ran hundreds of user tests over the years, and every time the process felt broken:

It could take 2–5 days just to get enough usable results

We spent hours watching recordings to extract a handful of insights

“Professional testers” rushed through tasks for pay, creating biased feedback

Platforms often started at €10k+ per year, with hidden fees on top

The result: slow iteration cycles, unreliable feedback, and user testing that often felt like a tax rather than a tool. We started asking: what if AI could help?

Could synthetic users replicate realistic human behaviors?

Could we simulate thousands of testers instantly instead of recruiting them?

Would that make user testing accessible to any team, not just those with big budgets?

That exploration led us to build Uxia, an AI-powered user testing tool that:

Delivers actionable insights in ~5 minutes, not days

Uses AI profiles to simulate thousands of behaviors

Offers flat pricing → unlimited tests, unlimited users, no hidden costs

You can upload a prototype, design, or flow and see where synthetic testers get stuck, what paths they take, and how they interact, all without waiting on recruiting or dealing with biased feedback loops.

Of course, we know this approach isn't perfect. Synthetic users won't fully replace human intuition, but we think they can remove friction from the early stages of iteration and help teams test much more often.

We're also on Product Hunt today if you want to support the launch. We'd love your feedback:

Where do you think AI-driven testers could work well, and where would they fall short?

Would you trust synthetic feedback enough to guide real product decisions?

If you’ve struggled with user testing, what’s the one thing you wish could be different?

Thanks for reading. Happy to answer anything; we'll be around all day.

Comments (4)

trulykp · 1d ago
Fascinating idea. How does Uxia actually generate the results? Are these just LLM outputs, or something else?
borja_d · 1d ago
Thanks for your question! Uxia isn’t just raw LLM answers. We layer on:

Personas with goals/motivations so feedback is authentic.

Task simulations to mimic real tester workflows.

Consistency rules so a “designer” vs. “novice” behaves differently.

Aggregation to surface patterns across many synthetic users.

The LLM is the engine, but the structure around it makes the output closer to real user research than generic AI text. It’s not a full replacement for humans, but it’s fast, cheap, and great for early-stage insights.
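To give a sense of what that layering can look like, here's a heavily simplified sketch (illustrative Python only, not our production code; the persona fields, the prompt wording, and the stubbed run_llm call are just stand-ins):

    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class Persona:
        name: str          # e.g. "novice shopper"
        goal: str          # what this user is trying to accomplish
        traits: list[str]  # consistency rules: how this persona behaves

    def build_prompt(persona: Persona, task: str) -> str:
        # Persona context and consistency rules folded into one prompt
        rules = "; ".join(persona.traits)
        return (
            f"You are {persona.name}. Your goal: {persona.goal}. "
            f"Stay consistent with these traits: {rules}. "
            f"Work through this task step by step and report where you get stuck: {task}"
        )

    def run_llm(prompt: str) -> str:
        # Stand-in for the actual model call; returns a canned report so the sketch runs
        return "Got stuck at: checkout form"

    def simulate(personas: list[Persona], task: str) -> Counter:
        # Run the task simulation per persona, then aggregate friction points
        friction = Counter()
        for p in personas:
            report = run_llm(build_prompt(p, task))
            if "Got stuck at:" in report:
                friction[report.split("Got stuck at:", 1)[1].strip()] += 1
        return friction

    personas = [
        Persona("novice shopper", "buy a gift quickly", ["impatient", "skips tooltips"]),
        Persona("designer", "evaluate the visual flow", ["methodical", "reads every label"]),
    ]
    print(simulate(personas, "Purchase an item from the demo store"))

The real system does much more at each layer, but the point is the structure wrapped around the model rather than the model itself.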

thevicpec · 1d ago
How realistic are these “synthetic users”? Don’t you lose the intuition and randomness of real people?
borja_d · 1d ago
Great question. Synthetic users are best for early iteration cycles: spotting usability friction, validating flows, stress-testing designs. They’re not a replacement for real human intuition (e.g. emotions, motivations). Our vision is that they complement human testing: use synthetic users to test 20 times during a sprint, then validate the final version with real humans.