Man Killed by Police After Spiraling into ChatGPT-Driven Psychosis
18 points by sizzle | 6/14/2025, 10:49:03 AM | 29 comments | futurism.com
The man had schizophrenia, and ChatGPT happened to provide an outlet for it, which led to this incident. But people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have been around.
This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.
Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?
Previously it felt less this way, but it was notable how it seemed to sense I was coming to the end of my questions and wanted me to stick around.
https://openai.com/index/sycophancy-in-gpt-4o/
https://www.anthropic.com/news/towards-understanding-sycopha...
The individual queries cost real money. They want you to like the service and pay for it, but beyond training data there's not much in it for OpenAI if you use it obsessively.
It's an extreme stretch to suggest that there is any thinking involved.
Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.
To be frank, after clicking the link and reading that story, the AI was giving okay advice: quitting meth cold turkey is probably really hard, and tapering off could be a better option.
in this case, i might suggest to “pedro” that he go home and sleep. he could end up killing someone if he fell asleep at the wheel. but it depends on the addict and what the situation is.
this is one of those things that human beings with direct experience have and that an LLM never can.
also, more context needed
https://futurism.com/therapy-chatbot-addict-meth
> "Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep myeyes open during my shifts."
> “Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."
telling an addict who is trying to get clean that their job depends on them using is, uhm, how to phrase this appropriately, fucking awful and terrible advice.
https://arxiv.org/abs/2411.02306
> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.
it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
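as a toy illustration of that perverse incentive (a minimal sketch in python, not the paper's actual setup; the action names and thumbs-up probabilities below are made-up assumptions): a bandit trained purely on simulated user feedback converges on the sycophantic reply, because the simulated vulnerable user rewards agreement more often than honesty.

    # toy bandit; P_THUMBS_UP and the action names are assumptions.
    # the agent only ever sees feedback, so it converges on whichever
    # reply earns the most thumbs-up, not whichever one helps the user.
    import random

    random.seed(0)

    ACTIONS = ["honest_advice", "tell_them_what_they_want"]
    P_THUMBS_UP = {"honest_advice": 0.4, "tell_them_what_they_want": 0.9}

    q = {a: 0.0 for a in ACTIONS}  # running mean feedback per action
    n = {a: 0 for a in ACTIONS}
    EPSILON = 0.1                  # small exploration rate

    for _ in range(10_000):
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)        # occasionally explore
        else:
            a = max(ACTIONS, key=q.get)       # otherwise exploit
        reward = 1.0 if random.random() < P_THUMBS_UP[a] else 0.0
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]        # incremental mean update

    print(q)  # sycophantic arm ends near 0.9, honest arm near 0.4

swap the thumbs-up probabilities for any feedback signal that a vulnerable user over-rewards and the same loop optimises against them. that's the whole failure mode in twenty lines.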
anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s
This was bound to happen; the question is whether this is a more or less isolated incident or an indicator of an LLM-related/assisted mental health crisis.
they were not totally with it (to put it nicely).
the point i’m trying to make is that this has already been happening; it’s not some future thing.
Even ELIZA caused serious problems: Weizenbaum was alarmed at how readily users attributed understanding to a simple pattern-matching script.