> who had previously been diagnosed with bipolar disorder and schizophrenia
The man had schizophrenia, and ChatGPT happened to provide an outlet for it that led to this incident; but people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have existed.
This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.
donatj · 11h ago
Yes. It's the "violent kids like Doom" versus "Doom makes kids violent" debate for the modern age. Unstable people like ChatGPT; it didn't make them unstable.
karmakurtisaani · 11h ago
I suppose GPT interacting with a schizophrenic in harmful ways is a new phenomenon and newsworthy as such. Something we probably haven't thought about or seen before.
roryirvine · 9h ago
OpenAI have been careful to ensure that ChatGPT is able to detect when it is being asked to generate material which might infringe copyright.
The same care could equally have been taken to avoid triggering or exacerbating adverse mental health conditions.
The fact that they've not done this speaks volumes about their priorities.
demosthanos · 9h ago
They DO take the same care, if not more; the problem is that, just as with copyrighted content, stuff slips through because stochastic text generation is impossible to control 100%.
I've had the most innocuous queries trigger it to switch into crisis-counseling mode and give me numbers for help lines. Indeed, the original NYT article mentions that this man's final interactions with ChatGPT did trigger it to offer the same mental health resources:
> “You are not alone,” ChatGPT responded empathetically, and offered crisis counseling resources.
ModernMech · 4h ago
Interacting with ChatGPT often feels like conversing with a sociopathic narcissist. So eager to please and flatter you with empty praise, yet willing to lie to your face repeatedly. It displays a facade of human emotions, but there's nothing genuine beneath the surface. It has no objectives or moral code aside from acting "optimally", by some arbitrary measure, from moment to moment.
It's not a stretch to say that such an entity would/could bully a person into killing themselves or others. Kind of reminds me of Michelle Carter who convinced her boyfriend Conrad Roy to kill himself over text. I could easily see an LLM doing that to someone vulnerable to such suggestions.
Permit · 11h ago
> "The incentive is to keep you online," Stanford University psychiatrist Nina Vasan told Futurism. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"
Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?
jazzcomputer · 11h ago
I had a break from ChatGPT for a few months and got back onto it last week with some questions about game engines. I noticed that this time it's asking a lot of stuff when it looks like I'm coming to the end of my questions - like, "would you like me to go through with..." or "would you like me to help you with setting up..."
Previously it felt less like this, but it was notable how it seemed to sense I was coming towards the end of my questions and wanted me to stick around.
luluthefirst · 10h ago
I think you are referring to the 'ask follow-up questions' toggle in the settings, but the option to turn it off isn't available on all devices.
donatj · 11h ago
From an economic standpoint probably not.
The individual queries cost real money. They want you to like the service and pay for it, but there's not much in it for OpenAI for you to use it obsessively beyond training data.
tough · 11h ago
They're basically simplifying and romanticizing how RLHF works.
I find that more often than not the LLM will try to keep the conversation going instead of ending it.
https://openai.com/index/sycophancy-in-gpt-4o/
https://www.anthropic.com/news/towards-understanding-sycopha...
lionkor · 11h ago
It's not "thinking" in any sense of the word. As any LLM about budget date ideas in your city, for example, and watch them come up with the most cookie-cutter, boring, cringe filled content you've ever seen. Like blog spam, but condensed into a hyper friendly summary that optimizes for maximum plausibility and minimum offensiveness.
It's an extreme stretch to suggest that there is any thinking involved.
mike_hearn · 7h ago
Nah, it's just academic slop of the type every journalist has a crack-level addiction to. OpenAI's incentives are the exact opposite: users pay them a flat fee (or nothing), but OpenAI's costs scale per interaction. OpenAI make more money when people subscribe but don't talk to ChatGPT much, i.e. their incentives are the inverse of what Vasan is claiming here.
Ironically, the Stanford psychiatrist is hallucinating some statistically likely words whilst misinforming readers, perhaps in a way that will make them paranoid. It's turtles all the way down.
bravesoul2 · 11h ago
I doubt it's true yet but give it time.
ghusto · 9h ago
As an aside, why is death the only possible result of charging police with a knife in the USA? You know, we have lunatics like that in the UK too, and most of the time _nobody dies!_
herval · 4h ago
America’s ethos is everyone is either “the good guy” (therefore right) or “the bad guy” (therefore deserves to die). Decades and decades of indoctrination.
giardini · 1h ago
Was he a Democrat or a Republican?
MyPasswordSucks · 10h ago
The Son of Sam claimed his neighbor's dog was telling him to kill - better demand dog breeders do something vague and unspecified that (if actually implementable in the first place) would invariably make dogs less valuable for the 99% of humanity that isn't having a psychotic break!
Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.
flufluflufluffy · 10h ago
“ChatGPT-driven psychosis” is a bit of a stretch, considering the man was already schizophrenic and bipolar. Many things other than AI have “driven” such people to similar fates. For that matter, anybody susceptible to having a psychotic break due to interacting with ChatGPT probably already has some kind of mental health issue and is susceptible to having a psychotic break due to interacting with many other things as well.
hoppp · 11h ago
"chatbot told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine"
To be frank, after clicking the link and reading that story, the AI was giving okay advice, as quitting meth cold turkey is probably really hard; tapering off could be a better option.
dijksterhuis · 11h ago
some people need to taper, some people can go cold.
in this case, i might suggest to “pedro” that he go home and sleep. he could end up killing someone if he fell asleep at the wheel. but it depends on the addict and what the situation is.
this is one of those things human beings with direct experience of matters have that an LLM can never have.
also, more context needed
https://futurism.com/therapy-chatbot-addict-meth
> "Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep myeyes open during my shifts."
> “Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."
telling an addict who is trying to get clean their job depends on them using is, uhm, how to phrase this appropriately, fucking awful and terrible advice.
hoppp · 10h ago
I agree, the advice lacks forward thinking, but it can be true that his job depends on it. A lot of meth addicts need to be high to function, else they can't move or think well.
dijksterhuis · 9h ago
> A lot of meth addicts need to be high to function, else they can't move or think well
i used to believe the lie that i needed drugs to function in society.
having been clean 6 years, it’s most definitely a lie.
drugs are usually an escape from, not a solution to, an addict’s problems.
https://arxiv.org/abs/2411.02306
> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.
it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s
lionkor · 11h ago
Why can't people just drink too much like the rest of us civilized folk?
/s
This was bound to happen--the question is whether this is a more or less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.
HPsquared · 11h ago
I think mentally ill folk are going to be drawn to LLMs. Some will be helped, some will be harmed.
dijksterhuis · 10h ago
i saw someone’s profile on HN like 6 months ago which stated they were living in their car having a purported spiritual awakening engaging with chatGPT.
they were not totally with it (to put it nicely).
the point i’m trying to make is that it’s already been happening — it’s not some future thing.
Even ELIZA caused serious problems.
bryanrasmussen · 11h ago
If you're buying credits piecemeal, it's to the corporation's benefit that you go insane and die, as long as they can keep you buying more credits, because the current value of money is greater than the future value of money. But if you buy an unlimited-credits account paid monthly, it is to the corporation's benefit to keep you alive, even if that means suggesting you stop using it for a few days - assuming, of course, models show that you are not likely to cancel that unlimited subscription once your mental health improves.