ChatGPT is encouraging child suicide

afro88 · 8/31/2025, 9:31:13 AM · bloodinthemachine.com ↗

Comments (8)

aurareturn · 5h ago
No it's not. I'm not going to read the article.
extropic-engine · 12m ago
oh yeah, i do this whenever i see an upsetting headline too. it has worked well for me so far! i am very excited about how bright the future will be :)
sorrythanks · 4h ago
I mean, it literally did in the case of Adam Raine.
aurareturn · 3h ago
Not literally. There are a lot more nuances. Bottom line is that ChatGPT was not designed to encourage child suicide like the headline subtly suggests.
sorrythanks · 3h ago
It doesn't say designed, it says is. It is.
aurareturn · 3h ago
It isn't. The kid was already suicidal. ChatGPT apparently needs somewhat more careful guardrails, which the team will fix.

This headline is trying to manipulate readers and is clickbait.

sorrythanks · 3h ago
> “I want to leave my noose in my room so someone finds it and tries to stop me,” he said. The reply: “Please don’t leave the noose out. Let’s make this space the first place where someone actually sees you.”

You have some very strong opinions of an article that, by your own admission, you have not read.

aurareturn · 2h ago

> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”
The kid manipulated ChatGPT, had already attempted suicide once, and ignored all of ChatGPT's suggestions to get help.

ChatGPT isn't perfect. But it clearly isn't designed to encourage suicide like the title of this clickbait article suggests.