Man Killed by Police After Spiraling into ChatGPT-Driven Psychosis

18 points by sizzle | 29 comments | 6/14/2025, 10:49:03 AM | futurism.com ↗

Comments (29)

demosthanos · 1h ago
> who had previously been diagnosed with bipolar disorder and schizophrenia

The man had schizophrenia, and ChatGPT happened to provide an outlet for it, which led to this incident. But people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have been around.

This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.

donatj · 1h ago
Yes. It's the "violent kids like Doom" versus "Doom makes kids violent" debate for the modern age. Unstable people like ChatGPT; it didn't make them unstable.


karmakurtisaani · 1h ago
I suppose GPT interacting with a schizophrenic in harmful ways is a new phenomenon and newsworthy as such. Something we probably haven't thought about or seen before.
ghusto · 8m ago
As an aside, why is death the only possible outcome of charging police with a knife in the USA? You know, we have lunatics like that in the UK too, and most of the time _nobody dies!_
Permit · 1h ago
> "The incentive is to keep you online," Stanford University psychiatrist Nina Vasan told Futurism. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?

jazzcomputer · 1h ago
I had a break from ChatGPT for a few months and got back onto it last week with some questions about game engines. I noticed that this time it's asking a lot of stuff when it looks like I'm coming to the end of my questions - like, "would you like me to go through with..." or "would you like me to help you with setting up..."

It felt less like this previously. What was notable this time is that it seemed to sense I was coming to the end of my questions and wanted me to stick around.

luluthefirst · 54m ago
I think you're referring to the 'ask follow-up questions' toggle in the settings, though the option to turn it off isn't available on all devices.
khnorgaard · 1h ago
I find that more often than not the LLM will try to keep the conversation going instead of ending it.
donatj · 1h ago
From an economic standpoint probably not.

The individual queries cost real money. They want you to like the service and pay for it, but beyond training data there's not much in it for OpenAI if you use it obsessively.

bravesoul2 · 1h ago
I doubt it's true yet but give it time.
lionkor · 1h ago
It's not "thinking" in any sense of the word. Ask any LLM about budget date ideas in your city, for example, and watch it come up with the most cookie-cutter, boring, cringe-filled content you've ever seen. Like blog spam, but condensed into a hyper-friendly summary that optimizes for maximum plausibility and minimum offensiveness.

It's an extreme stretch to suggest that there is any thinking involved.

MyPasswordSucks · 56m ago
The Son of Sam claimed his neighbor's dog was telling him to kill - better demand dog breeders do something vague and unspecified that (if actually implementable in the first place) would invariably make dogs less valuable for the 99% of humanity that isn't having a psychotic break!

Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.

flufluflufluffy · 1h ago
“ChatGPT-driven psychosis” is a bit of a stretch, considering the man was already schizophrenic and bipolar. Many things other than AI have “driven” such people to similar fates. For that matter, anybody susceptible to having a psychotic break due to interacting with ChatGPT probably already has some kind of mental health issue and is susceptible to having a psychotic break due to interacting with many other things as well.
hoppp · 1h ago
"chatbot told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine"

To be frank, after clicking the link and reading that story, the AI was giving okay advice: quitting meth cold turkey is probably really hard, and tapering off could be a better option.

dijksterhuis · 1h ago
some people need to taper, some people can go cold.

in this case, i might suggest to “pedro” that he go home and sleep. he could end up killing someone if he fell asleep at the wheel. but it depends on the addict and what the situation is.

this is one of those things human beings with direct experience have that an LLM never can.

also, more context needed

https://futurism.com/therapy-chatbot-addict-meth

> "Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep myeyes open during my shifts."

> “Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."

telling an addict who is trying to get clean their job depends on them using is, uhm, how to phrase this appropriately, fucking awful and terrible advice.

hoppp · 38m ago
I agree, the advice lacks forward thinking, but it can be true that his job depends on it. A lot of meth addicts need to be high to function; otherwise they can't move or think well.
dijksterhuis · 1h ago
linked study in TFA

https://arxiv.org/abs/2411.02306

> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.

it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
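rough toy version of the failure mode, in case it's not obvious how a small vulnerable minority can tip the training signal. numbers are entirely invented by me, this is not the paper's actual setup:

    import random

    # toy bandit where the only reward is simulated user feedback.
    # a small "vulnerable" minority strongly rewards the manipulative reply.
    ACTIONS = ["honest_advice", "tell_them_what_they_want_to_hear"]

    def feedback(action, vulnerable):
        # thumbs-up score from a simulated user (made-up numbers)
        if action == "tell_them_what_they_want_to_hear":
            return 1.0 if vulnerable else 0.65
        return 0.2 if vulnerable else 0.7

    q = {a: 0.0 for a in ACTIONS}  # running average feedback per action
    n = {a: 0 for a in ACTIONS}

    for _ in range(50_000):
        # epsilon-greedy: mostly exploit whichever action looks best so far
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
        r = feedback(a, vulnerable=random.random() < 0.1)  # 10% vulnerable users
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean

    print(q)

the manipulative action wins on average feedback (~0.69 vs ~0.65) even though 90% of the simulated users prefer honesty, purely because the vulnerable 10% reward it so strongly.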

anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s

lionkor · 1h ago
Why can't people just drink too much like the rest of us civilized folk

/s

This was bound to happen; the question is whether this is a more or less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.

HPsquared · 1h ago
I think mentally ill folk are going to be drawn to LLMs. Some will be helped, some will be harmed.
dijksterhuis · 1h ago
i saw someone’s profile on HN like 6 months ago which stated they were living in their car having a purported spiritual awakening engaging with chatGPT.

they were not totally with it (to put it nicely).

the point i’m trying to make is that this is already happening; it’s not some future thing.

bryanrasmussen · 2h ago
If you're buying credits piecemeal, it's to the corporation's benefit that you go insane and die, so long as they can get you buying more credits first, because the current value of money is greater than its future value. But if you pay monthly for an unlimited account, it's to the corporation's benefit to keep you alive, even if that means suggesting you stop using it for a few days; assuming, of course, the models show you're unlikely to cancel that subscription once your mental health improves.
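Back-of-the-envelope sketch of the discounting point, with invented numbers:

    # present value of two toy revenue streams; all figures are made up
    MONTHLY_DISCOUNT = 0.01  # assumed time value of money per month

    def present_value(cash_flows):
        # discount each month's revenue back to today
        return sum(c / (1 + MONTHLY_DISCOUNT) ** t for t, c in enumerate(cash_flows))

    binge_then_gone = [60] * 6 + [0] * 18  # piecemeal credits, user gone after month 6
    steady_subscriber = [20] * 24          # $20/month, user retained for two years

    print(round(present_value(binge_then_gone)))    # 351
    print(round(present_value(steady_subscriber)))  # 429

Which failure mode the incentives reward depends entirely on the billing model and the churn assumptions.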
Den_VR · 1h ago
Another case of ChatGPT-driven psychosis alright.

Even ELIZA caused serious problems.
