Social AI companions pose unacceptable risks to teens and children under 18

56 CharlesW 43 5/7/2025, 5:42:56 PM commonsensemedia.org ↗

Comments (43)

disambiguation · 14h ago
I want to know if anyone has answered the question: what does a healthy relationship with this thing look like?

All kids grow up and are eventually exposed to sex, drugs, and rock 'n' roll. These things are part of our world; you have to coexist with them. The problem with video games, social media, AI, and all things tech is that they're so new and evolving so fast that no one really knows what a healthy relationship looks like. Awareness is growing, though, and we've started asking questions like: how much screen time is OK? At what age do I allow my kid to make a social media account? Should we be using our phones last thing before bed and first thing in the morning? Not to mention more widespread issues of privacy and exposure to content ranging from amusing to abusive.

AI as a "convincing BS artist" you can engage with endlessly is something I struggle to wrap my head around. My personal policy is to keep AI on a short leash: use it sparingly, don't overly rely on it, and always question its assertions. But allowing unrestricted access to a powerful tool that requires self-control and good judgment is inviting disaster. Banning it for kids makes sense, but what about everyone else?

Animats · 16h ago
Better social AI companions might help. They would be a step up from, say, the 30th percentile parent.

SF versions:

"I Always Do What Teddy Says" (1964), by Harry Harrison.[1]

"A Young Lady's Illustrated Primer", in Neal Stephenson's The Diamond Age.

[1] https://archive.org/details/bestofharryharri0000harr_z2p6

blobbers · 16h ago
Can someone link the actual products they're talking about? ChatGPT isn't exactly great at forming emotional bonds, but I could see some other app doing this.
sandinmyjoints · 16h ago
> We conducted extensive research on social AI companions as a category, and specifically evaluated popular social AI companion products including Character.AI, Nomi, Replika, and others, testing their potential harm across multiple categories.
K0balt · 16h ago
You’d be surprised. I think it varies with the update, but I’ve often suspected that it has been optimized for role-play at some level. As OAI looks for mass-market fit I expect this to continue.
2099miles · 16h ago
Character ai is the one I heard about most.
2099miles · 16h ago
I feel like this is one of the most obvious takes out there, but simultaneously parents are wildly unaware of it. Parents need to be more aware of AI companions and Roblox and how bad they are.
autoexec · 16h ago
Parents should be more aware of how terrible Roblox is generally. It's the kind of product that should be regulated out of existence.
whatshisface · 16h ago
It's high time we had a Minecraft Bill.
autoexec · 15h ago
I don't know what Minecraft is like these days now that Microsoft has its hands in it, but when I played it, Minecraft didn't have online gambling, microtransactions, sexual predators, child labor/exploitation, advertisements, brand ambassadors and celebrities manipulating kids, extremist propaganda, or any of the other harmful things Roblox targets children with and exposes them to.

If Minecraft is just like Roblox now, then sure, I'd be glad to back a bill to regulate that out of existence too.

lemoncookiechip · 16h ago
Honestly, I think they pose a far bigger risk to some adults. Adults have a harder time making friends and changing themselves when stuck in a loop, and loneliness is growing rapidly in big cities.

Of course children are children, and adults are responsible for their own choices.

Btw, I like generative AI and LLMs, I'm not trying to say "ban it" or "regulate it", just pointing out that lonely adults are a very real thing, and some of them can and will get stuck in this, the same way they can and do get stuck in other online hobbies.

ohso4 · 16h ago
How exactly do they think that parents can ban it? You can just ask ChatGPT to become a social companion.
whatshisface · 16h ago
>How exactly do they think that parents can ban it?

Like this:

"You're allowed to walk to your friend's house, but don't suddenly sprint out into the street."

Or,

"You're allowed to talk to the librarian, but not the guy who stands around outside with a paper bag in his hands."

Or maybe,

"You're allowed to put things in the microwave, but not metal utensils."

tossandthrow · 16h ago
You regulate it, easy and simple. Simply make it illegal to provide a social AI companion to people below the age of 18.
lenerdenator · 16h ago
And, y'know, actually enforce the regulation.
tossandthrow · 15h ago
I don't think it is wise to get too caught up in enforcement.

- we don't want a society based on control.

Add enforcement proportionate to the risk it poses.

lenerdenator · 13h ago
> - we don't want a society based on control.

Bit late, or not, depending on how you look at it.

spyrja · 16h ago
Can't say I see this trend declining any time soon. People seem to find affirmation (and in some sense validation) interacting with these LLMs. Provided the AI in question is well-aligned, that shouldn't be much of a concern. Not much different from talking to a friend or therapist for emotional support, is it?
johnea · 13h ago
</sarcasm>

Sure, if they just make one with the guiding principles of the pink Hello Kitty assault rifle, it'll be great for the kids!

</serious>

All commercial products will be designed to maximize revenue to shareholders; no other factors will ever be considered.

Any deviation from this path will lead to shareholder lawsuits alleging failure to uphold fiduciary responsibilities.

waffletower · 16h ago
Is everyone subscribing to the paywall to read a solitary cross-indexed article? Or are most of us commenting on the paragraph we are allowed to see?
palmotea · 14h ago
"Social AI companions are the next frontier in EduTech, and should be welcomed with hope and optimism," said the VC. "Our mission is to change the world for the better, and anyone in our way is an evil Luddite trying to hurt you."
photochemsyn · 16h ago
I can see educational AI companions as workable in narrow contexts; e.g., a model fine-tuned on Paul Erdős, Martin Gardner, and similar could be great for helping students work through math problem sets.

You'd probably want it to reject questions on religion, politics, and human relationships to avoid furious parental outrage, though. Narrow, well-defined contexts only. Even so, some kids would come up with jailbreak strategies.

vincnetas · 17h ago
Can't read the article without a subscription. Is this OK with HN guidelines?
duskwuff · 16h ago

    // Run in the browser console to remove the subscription gate:
    document.querySelector("#user-plus-gate").remove()
    document.body.classList.remove("csm-premium-gated")
There's also a pair of more comprehensive reports (linked from the main page) on:

Social AI companions in general: https://www.commonsensemedia.org/sites/default/files/pug/csm...

Character.AI in particular: https://www.commonsensemedia.org/sites/default/files/pug/csm...

blobbers · 16h ago
this guy javascript debug consoles.
bpodgursky · 16h ago
I mean let's be real, they pose unacceptable risks to everyone. But in the west we only have strong societal norms around protecting children from themselves.
autoexec · 16h ago
Exactly. The posted link is unreadable, but other stories on the topic (for example: https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-un...) don't give me any reason to think that they are safe for adults. Adults are just somewhat better able to handle the hazardous material.
whatshisface · 16h ago
Do chatbots that pretend to be anime main characters really pose risks to anyone? Really? You all know that there are real things in the world like toxic waste dumps and human trafficking right?
thot_experiment · 15h ago
They pose risks to people in the same way porn, gambling or drugs pose a risk to people. We should as a society generally err on the side of being permissive with this stuff while providing the tools necessary for people to be safe.
Der_Einzige · 16h ago
Lonely young people are supposed to suffer for their sin of being unacceptable to the masses. We shouldn't allow future incel types to find a social outlet, because it short-circuits the "nudge" that makes them "self-improve".

This is basically what the anti-AI as social companion crowd believes.

Actually, nerdy or autistic basement dwelling people are not "bad" and often do not deserve the social scorn they get. It's good that we can short-circuit the "need" for social interaction, especially with these kind of companions.

All this pearl clutching because one kid doing NSFW chats with a Daenerys Targaryen chatbot on Character.AI got some media attention after committing suicide.

palmotea · 14h ago
> Lonely young people are supposed to suffer for their sin of being unacceptable to the masses.

The solution to that problem is not to push fake e-friends on them. That's like "Feeling sad? Try heroin, it will make you feel good!"

And honestly, your take sounds like the pseudo-empathetic sales pitch of someone trying to push a technology and block anything that could stand in the way of making money from its adoption.

sweetheart · 16h ago
One can believe that all people are deserving of love and friendship regardless of who they are or what they've done, and simultaneously believe that replacing social interaction with AI is generally a net harm for any/everyone. No one is bad because they want social stimulation from an AI, but I think it reinforces damaging norms that will leave us all worse off.
baggy_trough · 16h ago
Oh look, it's the people that want to ban the sales of violent video games to minors.
autoexec · 16h ago
Do you have a source for that? They seem to take a pretty reasonable approach here: https://www.commonsensemedia.org/articles/whats-the-impact-o...
baggy_trough · 16h ago
Read the Wikipedia article on them. It's a large section.
imachine1980_ · 16h ago
It's not. It's like saying microtransactions are bad: they are. They prey on you, and if you get hooked as a teen, you'll probably end up addicted. https://automaton-media.com/en/news/almost-19-of-japanese-pe...
bn-l · 16h ago
This is totally, totally different.
doright · 16h ago
I've wondered why, and what societal shifts led us here. Maybe the "violent video games cause harm" narrative didn't work out in practice because the types of console video games that tend to become popular don't have the harmful engagement elements being talked about now? Propaganda as a concept has existed far longer than video games. But mere depictions of violence don't incite behavioral change the way that "social optimization" elements do?

(Example: treating lootbox items not as bits of fictional lore the player's hero character finds in mythical dungeons, but as a set of items the person playing has invested their real-world shillings into according to predefined economic rules set by the designers, such that their livelihood becomes enmeshed with the game world)

SirFatty · 16h ago
You say that like it's a bad thing.
baggy_trough · 15h ago
Yes.


emorning3 · 16h ago
Well, I think this is a really good point.

I'm not in favor of banning games, but I really can't discern the difference between playing a game and interacting with an AI.

nickthegreek · 15h ago
In the near future I could easily imagine most games involving you interacting with an AI.
emorning3 · 16h ago
I just noticed that all my interaction with HN always takes place at the bottom of the page :-)