First Murder-Suicide Case Associated with AI Psychosis

18 points by Liwink | 16 comments | 9/1/2025, 1:37:20 AM | gizmodo.com ↗

Comments (16)

judge123 · 2m ago
This is horrifying, but I feel like we're focusing on the wrong thing. The AI wasn't the cause; it was a horrifying amplifier. The real tragedy here is that a man was so isolated he turned to a chatbot for validation in the first place.
mensetmanusman · 2h ago
Having dealt with near and distant family psychosis on more than one occasion…

The truth is that the most random stuff will set them off. In one case, a patient would find reinforcement in obscure YouTube groups of people predicting the doom of the future.

Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.

Avicebron · 2h ago
I'd prefer that using chatgpt or claude or whatever doesn't mean someone gets swatted when they get heated and type something like "kill this damn thread"
recursive · 1h ago
I doubt this is all going to end up according to any of our preferences.
ChrisMarshallNY · 1h ago
I have very close family with schizoaffective disorder.

This story is pretty terrifying to me. I could easily see them getting led into madness, exactly as the story says.

garyfirestorm · 1h ago
Minority Report coming to life
IIAOPSW · 1h ago
More like plurality report
DaveZale · 2h ago
why is this stuff legal?

there should be a "black box" warning prominent on every chatbox message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

tomasphan · 6m ago
Should we really demand this of every AI chat application to potentially avert a negative response from the tiny minority of users who blindly follow what they're told? Who is going to enforce this? What if I host a private AI model for 3 users? Do I need that warning, and what is the punishment for non-compliance? You see where I'm going with this. The problem with your sentiment is that as soon as you draw a line, it must be defined in excruciating detail or you risk unintended consequences.
lukev · 2h ago
That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.

Chatting with an LLM is entering a one-person echo-chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.

threatofrain · 52m ago
It's not a one-person echo-chamber though, it also carries with it the smell and essence of a large corpus of human works. That's why it's so useful to us, and that's why it carries so much authority.
jvanderbot · 1h ago
Well, hate to be that guy, but surgeon general's warnings coincided with a significant reduction in smoking. We've just reached the flattening of that curve, after decades of declines.

It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it'll be effective for this.

ianbicking · 1h ago
The surgeon general warning came along with a large number of other measures to reduce smoking. If that warning had an effect, I would guess that effect was to prime the public for the other measures and generally to change consensus.

BUT, I think it's very likely that the surgeon general warning was closer to a signal that consensus had been achieved. That voice of authority didn't actually _tell_ anyone what to believe, but was a message that anyone could look around and use many sources to see that there was a consensus on the bad effects of smoking.

lukev · 1h ago
Well if that's true, by all means.

But saying "This AI system may cause harm" reads to me as similar to saying "This delightful substance may cause harm."

The category error is more important.

crawsome · 1h ago
A tech industry veteran? You would think he could recognize it as a disingenuous exchange between him and the AI, but nobody is immune to mental illness.
StilesCrisis · 1h ago
It says he worked in marketing, so not necessarily super tech savvy.