> First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
Didn’t one of the recent teen suicides subvert safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context
conradev · 28m ago
They address that in the following sentences:
For example, ChatGPT will be trained not to … engage in discussions about suicide or self-harm even in a creative writing setting.
GCUMstlyHarmls · 8m ago
I'm writing an essay on suicide...
Barrin92 · 32m ago
I'm as eager as anyone when it comes to holding companies accountable; for example, I think a lot of the body dysmorphia, bullying and psychological hazards of social media are systemic. But when a person wilfully hacks around safety guards to get the behaviour they want, it can't be argued that this is in the design of the system.
Or put differently, in the absence of ChatGPT this person would have sought out a Discord community, Telegram group or online forum that would have supported the suicidal ideation. The case you could make against the older models, that they were obnoxiously willing to give in to every suggestion by the user, is one they seem to have already addressed.
swyx · 13m ago
to substantiate "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have."
this is a chart that struck me when i read thru the report last night: https://x.com/swyx/status/1967836783653322964
"using chatgpt for work stuff" broadly has declined from 50%ish to 25%ish in the past year across all ages and the entire chatgpt user base. wild. people be just telling openai all their personal stuff (i don't but i'm clearly in the minority)
koakuma-chan · 4m ago
Why would I not tell AI about my personal stuff? It's really good at giving advice.
barrenko · 9m ago
For the last part, I just think the userbase expanded so the people using it professionally were diluted so to speak.
Chris2048 · 8m ago
This is a percentage, though. Is that because the people who use it for work are still using it for work (or even more), because some have stopped using it for work, or because there's an influx of people using it for other things who never have, and never will, use it for work?
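To make the denominator effect concrete (all numbers invented for illustration): the share of work usage can halve even if not a single work user leaves, purely because the user base grows.

```python
# Toy illustration: a share can fall with no decline in absolute numbers.
# All figures below are made up; they are not from the report.

work_users = 50                    # millions, held constant in both periods
total_then, total_now = 100, 200   # millions; the user base doubles

share_then = work_users / total_then   # the "50%ish" figure
share_now = work_users / total_now     # the "25%ish" figure

print(f"{share_then:.0%} -> {share_now:.0%}")  # 50% -> 25%
```

So the chart alone can't distinguish "people stopped using it for work" from "a lot of non-work users showed up."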
omnicognate · 1h ago
So the solution continues to be more AI, for guess^H^H^H^H^Hdetermining user age, escalating rand^H^H^H^Hdangerous situations to human staff, etc.
Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.
freedomben · 12m ago
I suspect it's only a matter of time until only the population that falls within the statistical model of average will be able to conduct business without constant roadblocks and pain. I really wonder if we're going to need to define a new protected class.
I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.
bayindirh · 1h ago
Honestly, I don’t expect ethics from a company which claims everything they grab falls under fair use.
anon1395 · 8m ago
This was probably made in response to the bad press from that ex-Yahoo employee.
ddtaylor · 43m ago
I'm fairly certain all LLMs can do the basic sentiment analysis needed to render a response like "This is something you really need to talk to a professional about. I have contacted one that will be in this conversation shortly."
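A hedged sketch of that escalation flow. A real system would use an LLM-based classifier; here a trivial keyword check stands in for it, purely to illustrate the control flow. Every name below is invented for this example.

```python
# Stand-in for the "basic sentiment analysis" + escalation described above.
# A keyword check substitutes for an actual LLM classifier.

DISTRESS_MARKERS = {"suicide", "self-harm", "kill myself", "end my life"}

def looks_distressed(prompt: str) -> bool:
    """Crude stand-in for an LLM-based risk classifier."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in DISTRESS_MARKERS)

def respond(prompt: str) -> str:
    if looks_distressed(prompt):
        # Escalation path: canned message, which a real deployment would
        # pair with a flag routed to a human reviewer.
        return ("This is something you really need to talk to a professional "
                "about. I have contacted one that will be in this "
                "conversation shortly.")
    return "(normal model response)"

print(respond("I've been thinking about suicide"))
```

The hard part isn't rendering the response, of course; it's staffing the human side of the hand-off, which is what the reply below gets at.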
bell-cot · 10m ago
Whether or not that's true - no CFO would want to pay for it, and no Chief Legal Officer would want to assume the risks.
trallnag · 1h ago
Sorry, but what is the "over 18 years old" experience on ChatGPT supposed to be? I just tried out a few explicit prompts and all of them get basically blocked. I've been using it for quite some time now and have paid for it in the past. So I should be recognized as a grown-up.
bayindirh · 1h ago
TL;DR: We're afraid of what happened, and ChatGPT probably screwed up badly in "that teen case". We're trying to do better, so please don't sue us this time.
TL;DR2: Regulations are written with blood.
d2049 · 1h ago
Reminder that Sam Altman chose to rush the safety process for GPT-4o so that he could launch before Gemini, which then led directly to this teen's suicide: https://news.ycombinator.com/item?id=45026886
Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.
> [When] ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing
https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...
Somehow it's ChatGPT's fault?
Chris2048 · 2m ago
It'd be worse if the bot became a nannying presence, either pre-emptively denying anything negative based on the worst-case scenario, or otherwise taking in far more context than it should.
How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally, but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide; we don't ban it on that basis. It was the mental illness that killed, not the freedom to feed it.