Launch HN: Societies.io (YC W25) – AI simulations of your target audience
Here’s a quick product demo: https://www.loom.com/share/c0ce8ab860c044c586c13a24b6c9b391?...
Marketers like to say that half their spend is wasted - they just don’t know which half. Real-world experiments help, but they’re too slow and expensive to run at scale. So we’re building simulations that let you test rapidly and cheaply to find the best version of your message.
How it works:
- We create AI personas based on real-world data from actual individuals, collected from publicly available social media profiles and web sources.
- For each audience, we retrieve relevant personas from our database and map them out on an interactive social network graph, which is designed to replicate patterns of social influence.
- Once you’ve drafted your message, each experiment runs a multi-agent simulation where the personas react to your content and interact with each other - runs take 30 seconds to 2 minutes. We then surface results and insights to help you improve your messaging.
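To make this concrete, here’s a deliberately toy sketch of the general shape of such a loop - illustrative only, not our production system. The Persona fields, the react() heuristic, and the use of networkx are all stand-ins; in the product, each reaction comes from an LLM-backed persona rather than a weighted coin flip:

    # Toy sketch of a persona simulation loop (illustrative, not production code).
    # react() is a stand-in for an LLM call; the weighting is invented.
    import random
    from dataclasses import dataclass, field

    import networkx as nx  # assumption: any graph library would do

    @dataclass
    class Persona:
        name: str
        interests: list[str] = field(default_factory=list)

    def react(persona, message, neighbor_reactions):
        """Stand-in for an LLM call; returns 'share', 'like', or 'ignore'."""
        weight = sum(i in message.lower() for i in persona.interests)
        weight += sum(r == "share" for r in neighbor_reactions)  # social influence
        return random.choices(["share", "like", "ignore"], [weight + 1, 1, 2])[0]

    def simulate(personas, graph, message, rounds=3):
        reactions = {p.name: "ignore" for p in personas}
        for _ in range(rounds):  # let reactions ripple through the network
            for p in personas:
                neighbors = [reactions[n] for n in graph.neighbors(p.name)]
                reactions[p.name] = react(p, message, neighbors)
        return reactions

    people = [Persona("ana", ["ai"]), Persona("bo", ["ads"]), Persona("cy", ["ai"])]
    net = nx.Graph([("ana", "bo"), ("bo", "cy"), ("cy", "ana")])
    print(simulate(people, net, "New AI tool for marketers"))

The neighbor_reactions term is the point: it’s the social-influence ripple that a one-shot survey of isolated respondents can’t capture.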
Our two biggest challenges are accuracy and UI. We’ve tested how well we predict the performance of LinkedIn posts, and the initial results have been promising: our model has an R² of 0.78, and we’ve found that “message spread” in our simulations is the single most important predictor of actual engagement when comparing posts by the same author. But there’s a long way to go in generalising these simulations to other contexts and in finding ground-truth data for evals. We have some more info on accuracy here: https://societies.io/#accuracy
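For clarity on the metric: by R² we mean the usual coefficient of determination between predicted and actual engagement. Here’s a minimal sketch of the calculation with made-up numbers (not our eval data):

    # Standard R^2 between simulated and actual engagement.
    # The engagement numbers below are made up for illustration.
    import numpy as np

    actual = np.array([120.0, 45.0, 300.0, 80.0, 150.0])     # real engagements
    predicted = np.array([100.0, 60.0, 280.0, 90.0, 170.0])  # simulation output

    ss_res = np.sum((actual - predicted) ** 2)      # residual sum of squares
    ss_tot = np.sum((actual - actual.mean()) ** 2)  # total sum of squares
    print(f"R^2 = {1 - ss_res / ss_tot:.2f}")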
In terms of UI, our biggest challenge is figuring out whether the ‘experiment’ form factor is attractive to users. We’ve deliberately focused on this (over AI surveys) because experiments leverage our expertise in social influence and how ideas spread between personas.
James and I are both behavioral scientists by training but took different paths to get here. I helped businesses run A/B tests to boost sales and retention. Meanwhile, James became a data scientist and, in his spare time, hooked together 33,000 LLM chatbots and wrote a paper about it (https://bpspsychub.onlinelibrary.wiley.com/doi/pdfdirect/10....). He showed me the simulations and we decided to make a startup out of it.
Pricing: Artificial Societies is free to try. New users get 3 free credits and then a two-week free trial. Pro accounts get unlimited simulations for $40/month. We’re planning to introduce team plans later, along with enterprise pricing for custom-built audiences.
We’d love you to give the tool a try and share your thoughts!
If it isn't based on ACTUAL buyers who have ACTUAL input, what is it really doing? Great job at creating something, but at the same time, it feels kind of unnecessary.
You're essentially telling people what you THINK they'll want.
I’ve seen a couple of startups pitching similar ideas lately - platforms that use AI personas or agents to simulate focus groups, either for testing products or collecting user insights. I can see the appeal: scaling audience feedback, reducing costs, and reaching demographics that are traditionally hard to access.
That said, this is one of the areas of AI that gives me the most concern. I work at a company building AI tools for creative professionals, so I'm regularly exposed to the ethical and sustainability concerns in this space. But with AI personas specifically, there is something a little more troubling to me.
One recent pitch really stuck with me: the startup was proposing to use AI personas for product focus groups and casually mentioned local government consultation as well. That's where I think this starts to veer into troubling territory. The idea of a local council using synthetic personas instead of talking directly to residents about policy decisions is alarming. It may be faster, cheaper, even easier to implement, but it fundamentally misunderstands what real feedback looks like.
LLMs don't live in communities. They don't vote, experience public services, or have lived context. No matter how well calibrated or "representative" the personas are claimed to be, they are ultimately a reflection of training data and assumptions - not the messy, multimodal, contradictory, emotional reality of human beings. And yet, decisions based on these synthetic signals could end up shaping products, experiences, or even policies that affect real people.
We're entering an era where human behaviour is being abstracted and compressed into models, and then treated as if it's a reliable proxy for actual human insight. That's a level of abstraction I'm deeply uncomfortable with, and it's not a signal I'd ever trust, regardless of how well it's marketed.
I'd be curious to know how you'd convince others who are similarly skeptical, or who don't want to see this kind of tech abused for the reasons listed above.
That's what motivated me to start researching "Artificial Societies" - first as an academic project, and now as a product everyone can use. I believe the best way to build a new technology is to make it useful for as many people as possible, rather than reserving it for governments and enterprises. That's why, unlike other builders in this space, we've made it a rule never to touch defence use cases, and why we've gone against much business wisdom to build a consumer product anyone can use rather than chasing bigger budgets.
We totally agree that synthetic audiences should never replace listening to real people - we insist on doing manual user interviews ourselves so that we can feel our users' pain firsthand. We hope what we build doesn't replace traditional methods but expands what market research can do. That's why we simulate how people behave in communities and influence one another: to capture the ripple effects that a traditional survey ignores when it treats humans as isolated line items rather than the communities we really are.
Hopefully, one day, just like a new plane is first tested in a wind tunnel before risking the life of a test pilot, a new policy will also first be tested in an artificial society, before risking unintended consequences in real participants. We are still in the early days though, so for now, we are just working hard to make a product people would love to use :)
Hadn't seen that paper, thanks for sharing it. This is the one I see cited most often that's got some similar vibes: https://arxiv.org/abs/2411.10109
I love the idea of going from "AI generated customer avatar" to "simulated real people". It would help add depth to the customer avatars, and lead to better product design.
I tried creating a society around products that I sell, but it looks like the "real-world data" is pulled from LinkedIn? I'm not necessarily targeting business people.