Show HN: Uxia: AI-powered user testing in minutes
As PMs (Google, Gopuff, Shiji), my cofounder and I ran hundreds of user tests over the years, and every time it felt broken:

- It could take 2–5 days just to get enough usable results
- We spent hours watching recordings to extract a handful of insights
- “Professional testers” rushed through tasks for pay, producing biased feedback
- Platforms often started at €10k+ per year, with hidden fees on top
The result: slow iteration cycles, unreliable feedback, and user testing that often felt like a tax rather than a tool.

We started asking: what if AI could help?

- Could synthetic users replicate realistic human behavior?
- Could we simulate thousands of testers instantly instead of recruiting them?
- Would that make user testing accessible to any team, not just those with big budgets?
That exploration led us to build Uxia, an AI-powered user testing tool that:

- Delivers actionable insights in ~5 minutes, not days
- Uses AI profiles to simulate thousands of tester behaviors
- Offers flat pricing → unlimited tests, unlimited users, no hidden costs
You can upload a prototype, design, or flow and see where synthetic testers get stuck, what paths they take, and how they interact, all without waiting on recruiting or biased feedback loops.

Of course, we know this approach isn’t perfect. Synthetic users won’t fully replace human intuition, but we think they can remove friction from the early stages of iteration and help teams test much more often.

We’re also on Product Hunt today if you want to support the launch. We’d love your feedback:
- Where do you think AI-driven testers could work well, and where would they fall short?
- Would you trust synthetic feedback enough to guide real product decisions?
- If you’ve struggled with user testing, what’s the one thing you wish could be different?
Thanks for reading. Happy to answer anything; we’ll be around all day.
Under the hood, we layer structure around the LLM:

- Personas with goals and motivations, so feedback is authentic
- Task simulations that mimic real tester workflows
- Consistency rules, so a “designer” vs. a “novice” behaves differently
- Aggregation to surface patterns across many synthetic users
The LLM is the engine, but the structure around it makes the output closer to real user research than generic AI text. It’s not a full replacement for humans, but it’s fast, cheap, and great for early-stage insights.
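To make that layering concrete, here is a minimal, hypothetical sketch of how a persona, a task prompt, and an aggregation step could fit around an LLM call. This is not Uxia's actual code: the `Persona` fields, the `build_prompt` helper, and the `aggregate` function are all illustrative assumptions, and the LLM call itself is stubbed out with mock results.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Persona:
    """A synthetic tester profile whose goals shape its behavior."""
    name: str
    role: str        # e.g. "designer" or "novice"
    goal: str
    patience: int    # consistency rule: novices give up sooner

def build_prompt(persona: Persona, task: str) -> str:
    """Turn a persona and task into a structured instruction for the LLM."""
    return (
        f"You are {persona.name}, a {persona.role}. "
        f"Your goal: {persona.goal}. "
        f"You abandon a flow after {persona.patience} confusing steps. "
        f"Task: {task}. Narrate each step and report where you get stuck."
    )

def aggregate(results: list[dict]) -> Counter:
    """Surface patterns: count how often each UI step blocked a tester."""
    return Counter(r["stuck_at"] for r in results if r["stuck_at"])

# Stubbed-out run: in a real system each result would come from an LLM
# call seeded with build_prompt(persona, task).
personas = [
    Persona("Ada", "designer", "evaluate the visual hierarchy", patience=5),
    Persona("Bo", "novice", "buy a product quickly", patience=2),
]
mock_results = [
    {"persona": "Ada", "stuck_at": None},
    {"persona": "Bo", "stuck_at": "checkout form"},
    {"persona": "Cy", "stuck_at": "checkout form"},
]
print(aggregate(mock_results).most_common(1))
```

The point of the wrapper is that the same mock-friendly structure works whether the "tester" is an LLM or a human: the prompt encodes the consistency rules, and the aggregation step turns many individual runs into a single ranked list of friction points.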