Just today, The Daily pod is about people who develop unhealthy relationships with ChatGPT. A teenage boy committed suicide, and a good part of the episode is about that. As a parent, it's heartbreaking to listen to...
https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...
biophysboy · 1h ago
> First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
Didn’t one of the recent teen suicides subvert safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context
WD-42 · 4m ago
Yes. The timing of this is undoubtedly related to the Daily episode this morning titled “Trapped in a GPT spiral”.
https://pca.st/episode/73690b66-8f84-4fec-8adf-e1a02d292085
For example, ChatGPT will be trained not to … engage in discussions about suicide or self-harm even in a creative writing setting.
GCUMstlyHarmls · 18m ago
I'm writing an essay on suicide...
Barrin92 · 42m ago
I'm as eager as anyone when it comes to holding companies accountable; for example, I think a lot of the body dysmorphia, bullying and psychological hazards of social media are systemic. But when a person wilfully hacks around safety guards to get the behaviour they want, it can't be argued that this is part of the design of the system.
Or put differently, in the absence of ChatGPT this person would have sought out a Discord community, Telegram group or online forum that would have supported the suicidal ideation. The case you could make against the older models, that they were obnoxiously willing to give in to every suggestion from the user, is one they seem to have already gotten rid of.
omnicognate · 1h ago
So the solution continues to be more AI, for guess^H^H^H^H^Hdetermining user age, escalating rand^H^H^H^Hdangerous situations to human staff, etc.
Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.
freedomben · 22m ago
I suspect it's only a matter of time until only the population that falls within the statistical model of average will be able to conduct business without constant roadblocks and pain. I really wonder if we're going to need to define a new protected class.
I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.
bayindirh · 1h ago
Honestly, I don’t expect ethics from a company which claims everything they grab falls under fair use.
swyx · 23m ago
to substantiate "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have."
this is a chart that struck me when i read thru the report last night:
https://x.com/swyx/status/1967836783653322964
"using chatgpt for work stuff" broadly has declined from 50%ish to 25%ish in the past year across all ages and the entire chatgpt user base. wild. people be just telling openai all their personal stuff (i don't but i'm clearly in the minority)
koakuma-chan · 14m ago
Why would I not tell AI about my personal stuff? It's really good at giving advice.
voakbasda · 4m ago
Because you’re not just telling the AI, you are also telling the company that built it, as well as their affiliated partners, advertisers, and data brokers?
nielsbot · 5m ago
ok but didn’t it advise that teen how to best kill himself?
For the last part, I just think the userbase expanded, so the people using it professionally were diluted, so to speak.
Chris2048 · 18m ago
This is a percentage, though. Is that because the people who use it for work are still using it for work (or even more), because some have stopped using it for work, or because there is an influx of people using it for other things who never have, and never will, use it for work?
charcircuit · 6m ago
>We’re building an age-prediction system to estimate age based on how people use ChatGPT.
>And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
This is unacceptable. I don't want the police being called to my house because an AI accused me of wrongthink.
voakbasda · 27s ago
This is why one should never say anything sensitive to a cloud-hosted AI.
Local models and open source tooling are the only means of privacy.
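For anyone who hasn't tried it, here's a minimal sketch of keeping a conversation entirely on your own machine, assuming an Ollama server running locally with a model already pulled (the model name and prompt are just placeholders):

    import json
    import urllib.request

    # Talk to a locally running Ollama server; nothing leaves localhost.
    payload = {
        "model": "llama3",  # assumes this model has been pulled locally
        "prompt": "Summarize my journal entry without storing it anywhere.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])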
ddtaylor · 53m ago
I'm fairly certain all LLMs can do the basic sentiment analysis needed to render a response like "This is something you really need to talk to a professional about. I have contacted one that will be in this conversation shortly."
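Something like that is already easy to wire up. A rough sketch, using OpenAI's hosted moderation endpoint as the "sentiment analysis" step and a hypothetical escalate_to_human() hook standing in for whatever the provider would actually do:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def escalate_to_human(conversation_id: str) -> None:
        # Hypothetical hook: page an on-call professional / support queue.
        print(f"escalating conversation {conversation_id} to a human")

    def guarded_reply(conversation_id: str, user_message: str) -> str:
        # Classify the message before the model ever answers it.
        result = client.moderations.create(input=user_message).results[0]
        if result.categories.self_harm or result.categories.self_harm_intent:
            escalate_to_human(conversation_id)
            return ("This is something you really need to talk to a "
                    "professional about. I have contacted one who will be "
                    "in this conversation shortly.")
        completion = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        )
        return completion.choices[0].message.content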
bell-cot · 20m ago
Whether or not that's true - no CFO would want to pay for it, and no Chief Legal Officer would want to assume the risks.
anon1395 · 17m ago
This was probably made in response to the bad press from that ex-Yahoo employee case.
trallnag · 1h ago
Sorry, but what is the "over 18 years old" experience on ChatGPT supposed to be? I just tried out a few explicit prompts and all of them get basically blocked. I've been using it for quite some time now and have paid for it in the past. So I should be recognized as a grown-up.
bayindirh · 1h ago
TL;DR: We're afraid of what happened and ChatGPT probably screwed up badly in "that teen case". We're trying to do better, so please don't sue us this time.
TL;DR2: Regulations are written with blood.
d2049 · 1h ago
Reminder that Sam Altman chose to rush the safety process for GPT-4o so that he could launch before Gemini, which then led directly to this teen's suicide:
previous discussion: https://news.ycombinator.com/item?id=45026886
Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.
> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing
Somehow it's ChatGPT's fault?
Chris2048 · 12m ago
It'd be worse if the bot becomes a nannying presence - either pre-emptively denying anything negative based on the worst-case scenario, or otherwise taking in far more context than it should.
How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally, but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide; we don't ban it on that basis - it was the mental illness that killed, not the freedom to feed it.