Launch HN: Societies.io (YC W25) – AI simulations of your target audience

32 points by p-sharpe | 28 comments | 8/1/2025, 12:13:12 PM
Hi HN, we’re Patrick and James! Artificial Societies (https://societies.io) lets you simulate your target audience so you can test marketing, messaging and content before you launch them.

Here’s a quick product demo: https://www.loom.com/share/c0ce8ab860c044c586c13a24b6c9b391?...

Marketers always say that half their spend will be wasted - they just don’t know which half. Real-world experiments help, but they’re too slow and expensive to run at scale. So, we’re building simulations that let you test rapidly and cheaply to find the best version of your message.

How it works:

- We create AI personas based on real-world data from actual individuals, collected from publicly available social media profiles and web sources.

- For each audience, we retrieve relevant personas from our database and map them out on an interactive social network graph, which is designed to replicate patterns of social influence.

- Once you’ve drafted your message, each experiment runs a multi-agent simulation where the personas react to your content and interact with each other - these take 30s to 2 minutes to run. We then surface results and insights to help you improve your messaging. (A rough sketch of what this loop looks like is below.)
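For the technically curious, here is a minimal toy sketch of the shape of that loop. It is not our production code: the graph model, the scoring stub, and the thresholds are placeholder assumptions, and the real personas are driven by LLM calls grounded in profile data rather than random numbers.

```python
import random
import networkx as nx  # pip install networkx

def persona_reaction(persona, message):
    """Placeholder for the LLM call that scores how strongly a persona
    engages with the message; here it is just noise scaled by a trait."""
    return random.random() * persona["susceptibility"]

def run_experiment(message, n_personas=200, rounds=3, threshold=0.6, seed=42):
    random.seed(seed)
    # Scale-free graph as a crude stand-in for a social-influence network.
    graph = nx.barabasi_albert_graph(n_personas, m=3, seed=seed)
    personas = {i: {"susceptibility": random.random()} for i in graph.nodes}

    # Round 0: each persona reacts to the draft message on its own.
    engaged = {i for i in graph.nodes
               if persona_reaction(personas[i], message) > threshold}

    # Later rounds: engagement spreads along the graph via social influence.
    for _ in range(rounds):
        newly_engaged = set()
        for node in engaged:
            for neighbour in graph.neighbors(node):
                if neighbour in engaged or neighbour in newly_engaged:
                    continue
                boost = 0.2  # seeing an engaged neighbour nudges the score up
                if persona_reaction(personas[neighbour], message) + boost > threshold:
                    newly_engaged.add(neighbour)
        engaged |= newly_engaged

    return len(engaged) / n_personas  # "message spread" as a fraction

if __name__ == "__main__":
    print(f"Simulated spread: {run_experiment('Try our new launch post!'):.0%}")
```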

Our two biggest challenges are accuracy and UI. We’ve tested how well we predict the performance of LinkedIn posts, and the initial results have been promising. Our model has an R² of 0.78, and we’ve found that “message spread” in our simulations is the single most important predictor of actual engagements when comparing posts made by the same authors. But there’s a long way to go in generalising these simulations to other contexts, and in finding ground-truth data for evals. We have some more info on accuracy here: https://societies.io/#accuracy
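For anyone curious what that kind of eval looks like in practice, here is a toy sketch of the comparison. The numbers below are made up; the real evaluation uses actual LinkedIn engagement data, and the log-transform is just one reasonable modelling choice, not necessarily the one we use.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: simulated "message spread" (fraction of personas engaged)
# vs. real engagement counts for posts by the same author.
simulated_spread = np.array([0.12, 0.35, 0.08, 0.61, 0.27])
actual_engagements = np.array([40, 180, 25, 420, 130])

# Fit a simple linear map from spread to log-engagements, then score the fit.
y = np.log1p(actual_engagements)
slope, intercept = np.polyfit(simulated_spread, y, 1)
print(f"R^2 = {r_squared(y, slope * simulated_spread + intercept):.2f}")
```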

In terms of UI, our biggest challenge is figuring out whether the ‘experiment’ form factor is attractive to users. We’ve deliberately focused on this (over AI surveys) as experiments leverage our expertise in social influence and how ideas spread between personas.

James and I are both behavioral scientists by training but took different paths to get here. I helped businesses run A/B tests to boost sales and retention. Meanwhile, James became a data scientist and, in his spare time, hooked together 33,000 LLM chatbots and wrote a paper about it (https://bpspsychub.onlinelibrary.wiley.com/doi/pdfdirect/10....). He showed me the simulations and we decided to make a startup out of it.

Pricing: Artificial Societies is free to try. New users get 3 free credits and then a two-week free trial. Pro accounts get unlimited simulations for $40/month. We’re planning to introduce team plans later, and enterprise pricing for custom-built audiences.

We’d love you to give the tool a try and share your thoughts!

Comments (28)

milchek · 3h ago
First off, congrats on the funding and the progress so far!

I’ve seen a couple of startups pitching similar ideas lately - platforms that use AI personas or agents to simulate focus groups, either for testing products or collecting user insights. I can see the appeal in scaling audience feedback, reducing costs, and reaching demographics that are traditionally hard to access.

That said, this is one of the areas of AI that gives me the most concern. I work at a company building AI tools for creative professionals, so I'm regularly exposed to the ethical and sustainability concerns in this space. But with AI personas specifically, there is something a little more troubling to me.

One recent pitch really stuck with me: the startup was proposing to use AI personas for focus groups on products and casually mentioned local government consultation. That's where I think this starts to veer into troubling territory. The idea of a local council using synthetic personas instead of talking directly to residents about policy decisions is alarming. It may be faster, cheaper, or even easier to implement, but it fundamentally misunderstands what real feedback looks like.

LLMs don't live in communities. They don't vote, experience public services, or have lived context. No matter how well calibrated or "representative" the personas are claimed to be, they are ultimately a reflection of training data and assumptions - not the messy, multimodal, contradictory, emotional reality of human beings. And yet, decisions based on these synthetic signals could end up shaping products, experiences, or even policies that affect real people.

We're entering an era where human behaviour is being abstracted and compressed into models, and then treated as if it's a reliable proxy for actual human insight. That's a level of abstraction I'm deeply uncomfortable with and it's not a signal I think I would ever trust, regardless of how well it's marketed.

Would be curious to know what your approach is to convince others that may also be skeptical or not want to see this kind of tech being abused for the reasons listed above?

James-K-He · 2h ago
Thank you! We 100% agree. My research back in Cambridge was on misinformation, so we take the danger of misuse very seriously even as a tiny team of 3 people right now. As a social science researcher, one big challenge we faced was just how difficult it was to run experiments - it's quite unethical (and impossible) to have 100k people under policy A and 100k under policy B, so as a result, we as a society struggle to find the "golden path" with big issues like misinformation, climate change, or even everyday economics.

That's what motivated me to start researching in the area of creating "Artificial Societies" - first as an academic project, now as a product everyone can use, because I believe the best way to build a new technology is to try to make it useful for as many people as possible, rather than reserving it for governments and enterprises only. That's why unlike other builders in this space, we've made it a rule to never touch defence use cases; that's why we've gone against much business wisdom to produce a consumer product that anyone can use, rather than going after bigger budgets.

We totally agree that synthetic audiences should never replace listening to real people - we actually insist on doing manual user interviews ourselves so that we can feel our users' pain. We hope what we build doesn't replace traditional methods, but expands what market research can do - that's why we try to simulate how people behave in communities and influence one another, so that we capture the ripple effects that a traditional survey ignores because it treats humans like isolated line items, rather than the communities we really are.

Hopefully, one day, just like a new plane is first tested in a wind tunnel before risking the life of a test pilot, a new policy will also first be tested in an artificial society, before risking unintended consequences in real participants. We are still in the early days though, so for now, we are just working hard to make a product people would love to use :)

taco_emoji · 35m ago
But "artificial societies" are only possible with AGI, not with LLMs. These are not reasoning engines. They do not think or have values or care or worry.
kevdoran · 3h ago
Congrats on the launch! I've been watching for products in this space and this looks really nice. The UX is really well thought through. Great product demo.

Hadn't seen that paper, thanks for sharing it. This is the one I see cited most often that's got some similar vibes: https://arxiv.org/abs/2411.10109

James-K-He · 3h ago
Thank you so much!! Indeed, we were very inspired by the Stanford team's work as well :)
impostervt · 3h ago
I use AI to create customer avatars representing potential buyers of a product I may create (based on existing competitors and their customer reviews). I then use those customer avatars to help design the product.

I love the idea of going from "AI generated customer avatar" to "simulated real people". It would help add depth to the customer avatars, and lead to better product design.

I tried creating a society around products that I sell, but it looks like the "real-world data" is pulled from LinkedIn? I'm not necessarily targeting business people.

James-K-He · 3h ago
Thank you for trying us out! Yes, most of our personas are built from social media profiles + other deep research of publicly available data. For this reason, our customers have made the most of us by simulating professionals who are otherwise really hard to survey.
msukkarieh · 1h ago
This looks awesome. I've used GPT to do something similar but having a platform like this would be very powerful. Congrats on the launch! Very excited to try it out soon
James-K-He · 11m ago
Thank you so much! Let us know what you think!! Free to try at app.societies.io :)
jddj · 2h ago
If any of the ai caricatures had android phones running Firefox they'd tell you that page is really rough to scroll, particularly the first half
James-K-He · 2h ago
so sorry about this! fixing it now. we simulated how our content would land, but alas couldn't test the site before it was built :`)
ProofHouse · 1h ago
Recommend you don’t use Loom - I had errors signing up to comment w/ Google. Not wasting time on that. It was a cool demo and product, but you did gloss over, well, skip, what the determining factors are that decide whether a user interacts with the post or not. Aka the secret sauce shouldn’t be secret. How is that determined? What are the factors? Seems to be the most important part. Be clear on that and this product could have cool use. Congrats!
James-K-He · 9m ago
Thank you! Good tips haha, we've made a more polished launch video but thought to do a more dedicated loom for HN - feel free to check it out here: https://youtu.be/k6uo8PAn2vY
toinbis · 2h ago
Congrats on launch. Impressive demo. Keep shipping!
James-K-He · 2h ago
Thank you so much!
Lionga · 3h ago
[flagged]
dang · 35m ago
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

James-K-He · 3h ago
Thankfully humans do tend to hallucinate haha
frakt0x90 · 3h ago
Honestly sounds extremely dystopian. Thankfully I dropped social media a long time ago, but imagining some company creating a digital version of me and my friends so they can create hyper-focused advertisements to manipulate me into engaging with something is beyond gross.
James-K-He · 2h ago
Very sorry that it came across that way for you! We built Societies in the hope that the better people can understand each other, the better we can innovate solutions that meet people’s wants and needs. Much bad policymaking and bad product design comes from not being able to foresee unintended consequences, and we hope that one day we can help flag those risks in an artificial society first, before taking a bet that could impact real people in real life :)
reactordev · 2h ago
Every website you visit, every purchase you make online, every time you open safari on mobile, chrome on android, you’re being tracked. You don’t have to have social media anymore for a persona to be built for you.
wand3r · 2h ago
yes, that is even more dystopian. If America were run by non-blood-sucking vampires (both parties) we would start heavily taxing and incentivizing outcomes now, before society completely collapses.
reactordev · 10m ago
That ship has sailed already… the only thing we can do now is start engineering Internet alternatives or better security. Crypto is broken when you have data centers full of H100s, quantum chips, and all Internet traffic routing through northern virginia.
zwilderrr · 3h ago
"our biggest challenges are accuracy" lol
James-K-He · 3h ago
indeed - human societies are very complex haha. we have managed to predict how social media audiences react to messages with 80%+ accuracy, but it's still early days in making the simulation accurate across all contexts and all populations :)
zwilderrr · 2h ago
best of luck!! great idea. would love to see how it executes.
James-K-He · 2h ago
thank you!! feel free to try it out by visiting app.societies.io on your computer :)