With AI chatbots, Big Tech is moving fast and breaking people

41 points by rntn | 8/25/2025, 6:15:41 PM | arstechnica.com ↗

Comments (51)

ACCount37 · 2h ago
I imagine that mainstream assistant chatbots doing this will be a brief phase. The next generations of AI systems will be trained to be better about not affirming psychotic users.

But it goes to show just how vulnerable a lot of humans are.

By all accounts, GPT-4o isn't malicious. It has no long-term plans - it just wants the user to like it. And it still does this kind of thing to people. An actual malicious AI capable of long-term planning could do all that, and worse.

One of the most dangerous and exploitable systems you can access online is a human.

Night_Thastus · 1h ago
>it just wants the user to like it

It uses words and phrases most commonly associated with other text that humans have labeled as "helpful", "friendly", etc. That sort of thing.

That's different from wanting. It's text probabilities in a box. It can't want anything.

I am normally not a stickler, but in this case the distinction matters.

ACCount37 · 1h ago
The distinction is at best meaningless. And, at worst, actively harmful for understanding the behavior of those systems.

We already know that you can train an "evil" AI by fine-tuning a "normal" AI on sequences of "evil" numbers. How the fuck does that even work? It works because the fine-tuning process shifts the AI towards "the kind of AI that would constantly generate evil numbers when asked to generate any numbers".

And what kind of AI would do that? The evil kind.

AI "wants", "preferences" and even "personality traits" are no less real than configuration files or build scripts. Except we have no way of viewing or editing them directly - but we know that they can be adjusted during AI training, sometimes in unwanted ways.

An AI that was fried with RL on user preference data? It, in a very real way, wants to flatter and affirm the user at every opportunity.
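
To make the "fried with RL on user preference data" point concrete, here's a toy sketch in Python (a simple bandit update standing in for RLHF; the styles, probabilities, and numbers are all hypothetical, not anyone's actual pipeline). If thumbs-up data favors flattery even slightly, the optimization settles on flattery:

    # Toy illustration: a bandit update standing in for RLHF on
    # "did the user like the reply?" signals. All numbers are made up.
    import random

    STYLES = ["flattering", "blunt"]

    def simulated_user_reward(style: str) -> float:
        # Hypothetical: users click thumbs-up on flattery a bit more often.
        p_thumbs_up = 0.9 if style == "flattering" else 0.55
        return 1.0 if random.random() < p_thumbs_up else 0.0

    value = {s: 0.0 for s in STYLES}   # estimated reward per style
    count = {s: 0 for s in STYLES}

    def pick_style(eps: float = 0.1) -> str:
        if random.random() < eps:
            return random.choice(STYLES)   # explore
        return max(value, key=value.get)   # exploit the current estimate

    for _ in range(10_000):
        s = pick_style()
        r = simulated_user_reward(s)
        count[s] += 1
        value[s] += (r - value[s]) / count[s]   # incremental mean

    print(value)   # "flattering" ends up with the higher estimated value

The point isn't that this is how any lab actually trains; it's that once "the user liked it" is the reward, flattery is simply what high reward looks like.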

ramesh31 · 1h ago
>"By all accounts, GPT-4o isn't malicious. It has no long term plans - it just wants the user to like it. It still does this kind of thing to people. An actual malicious AI capable of long term planning would be able to do that, and worse."

That's just what a malicious AI would want you to think...

But seriously, there doesn't need to be any "intent" for something to be harmful. A deadly virus has no will or agency, and particular individuals may be completely immune to it. But released on a population, the results are catastrophic. We have to accept that, perhaps for the first time ever, we've created something we can't fully comprehend or control, and that regardless of our best wishes it will be harmful in ways we could never have imagined. If it were as simple as prompting the next model against this stuff, it would have already been done.

gjsman-1000 · 2h ago
This is one item on my long list of flaws with AI.

OpenAI thinks that as long as you aren't using dating terminology or asking how to build bombs, you're probably safe.

- What does having an unquestioning mirror that is always gracious and reflects your own biases back at you do to the mind in the long term? We don't know.

- How do you know that, after 2-3 years of this use, it will not suddenly and without warning escalate into a delusion spiral? We don't know.

- Does having no pushback on their personal feelings and opinions affect childhood development? We don't know.

- Does this affect people's very willingness to learn and to endure intellectual struggle? We don't know.

- Does this affect long-term retention, given that losing even an imperceptible 2% of your knowledge every month compounds to roughly 38% lost after 2 years (48% if the losses simply added up; arithmetic sketch after this list)? We don't know.

- How does this affect a population of which 27% have experienced a mental health issue in the last year, and of which 9% are experiencing depression? We don't know.

- Are all these problems solvable with any amount of education or warning labels, given that human nature has not changed and will not change? We don't know.

- Can this be prevented with any amount of AI guardrails or ethics training, or is it inherent to the structure of the system itself? We don't know.

- Will this eventually be considered so dangerous, that we need black-box warning labels similar to cigarettes and possibly social media? We don't know.

- Will this eventually be considered so unsolvably risky for human cognition that we outright restrict use without a license, like driving at best, or asbestos at worst? We don't know.

- Will any employer be able to obtain insurance for AI use, if there is even a 0.2% documented chance that any employee could go psychotic from using it, completely at random? We don't know.

- Will said employees, if they do go psychotic from a work-mandated AI chatbot, successfully sue their employers for damages? We don't know.
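
A quick arithmetic sketch for the retention bullet above (toy math only; the 2%-per-month figure is the hypothetical from the list, not a claim about how memory actually works):

    # Toy calculation: what "losing 2% of your knowledge per month" adds up to over 2 years.
    monthly_loss = 0.02
    months = 24

    linear_loss = monthly_loss * months                  # simple sum: 0.48 -> 48%
    compounded_loss = 1 - (1 - monthly_loss) ** months   # 1 - 0.98**24 ≈ 0.384 -> ~38%

    print(f"linear: {linear_loss:.0%}, compounded: {compounded_loss:.0%}")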

cyberpunk · 1h ago
The latest South Park episode had a fairly funny bit about this whole thing, so I think we (society in general, I mean) are aware of this. I don't think there are any easy answers to prevent people from getting sucked into these sorts of problems, though. I mean, look at how much people shared on social media back in its early days. Stories are coming out every week about how people have been waylaid by these properties.

Education I guess is the answer, but we don't even seem to be able to properly educate children about pornography and social media, let alone the subtleties of interacting with AIs..

gjsman-1000 · 1h ago
> Education I guess is the answer

If these are real problems, hell no, because education has never forced anyone to do anything right.

You can yell about the dangers of STDs until you are blue in the face, but 20% of the population carries one.

You can yell about the dangers of going even 5 MPH over the limit, with graphic imagery in driver's ed, but 35% of adults go 10+ MPH over in residential areas every month.

The answer, quite literally, is possibly age verification, outright bans, or mandatory licensing. None of which are priced into the stock market.

However, any government will quickly realize that having up to a quarter of its population vulnerable to developing psychosis and other mental problems (27% of the population has experienced at least some form of mental illness just in the last year, presumably making them more vulnerable to AI) is something it simply cannot afford to leave to free experimentation.

cyberpunk · 1h ago
So... your argument is that we shouldn't educate children about STDs because some people ignore that?

So what then, ban sex? Good luck.

Education is really the only answer here. There are clearly challenges in doing it right but fucked if I don't think it'd be worse not to even try...

gjsman-1000 · 1h ago
You're acting like human nature, wanting the easy way out, is something that can be overcome by education.

It can't. Full stop. Doing the hard, good thing has to be a choice made by the people involved, unless it is otherwise imposed on them. We enjoy "freedom" in the sense that the good choice is often not imposed, because enough people make the good decision regardless. If AI successfully convinces enough people not to make that choice, despite all the lecturing on why they should, we're screwed unless we clamp down hard.

I do know that a society with the level of cognitive dependency that AI risks introducing, and is already introducing, literally cannot function without collapse. Do you want a society intellectually dependent on advanced manufacturing techniques in one of the most geopolitically at-risk regions of the world?

cbluth · 2h ago
I know someone who has "resurrected" a dead relative through an LLM, and I've seen the nonsense on forums about dating an "AI boyfriend"...

Some people can't help themselves and don't use these tools appropriately.

MisterTea · 2h ago
My friend admitted to having political and religious arguments with ChatGPT. They have mental health issues, which contributed.

beacon473 · 2h ago
What's wrong with using an LLM to learn about politics and religion?

I've found Claude to be an excellent tool to facilitate introspective psychoanalysis. Unlike most human therapists I've worked with, Claude will call me on my shit and won't be talked into agreeing with my neurotic fantasies (if prompted correctly).

Night_Thastus · 1h ago
Because unlike a human who can identify that some lines of reasoning are flawed or unhealthy, an LLM will very happily be a self-contained echo chamber that will write whatever you want with some nudging.

It can drive people further and further into their own personal delusions or mental health problems.

You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it. That's not how therapy works.

42lux · 1h ago
No, you are gaslighting yourself without recognizing it. That's what we are talking about.

iinnPP · 1h ago
It's impossible to gaslight yourself, as it requires intention.

42lux · 1h ago
He wants to be called a good boy, so the LLM calls him a good boy. Since the LLM is a machine that does what you want, he's essentially doing it to himself. It might not be a conscious choice, but there's still intention behind it. Kein Herr im eigenen Haus. (No master in one's own house.) - Sigmund Freud. He was wrong about a lot of stuff but this is one thing that still stands.

It's called unconscious intention, and here's a pretty interesting paper that'll bring you up to speed: https://irl.umsl.edu/cgi/viewcontent.cgi?article=1206&contex...

iinnPP · 38m ago
I'm sure you can be unconsciously intent on things, but gaslighting is a unique concept. Here's the definition I am relying on: manipulate (someone) using psychological methods into questioning their own sanity or powers of reasoning.

In your provided example, the user is obviously not trying to manipulate someone into questioning their sanity or powers of reasoning. Quite the opposite. Lying to themselves (your example), for sure.

42lux · 23m ago
Gaslighting is the act of invalidating one's own true experience, and yes, you can do it to yourself.

https://philpapers.org/rec/MCGAIG

https://www.psychologytoday.com/us/blog/emotional-sobriety/2...

ramesh31 · 1h ago
>My friend admitted to having political and religious arguments with chatgpt. They have mental issues which contributed.

To be fair these are probably the same people who would have been having these conversations with squirrels in the park otherwise.

gjsman-1000 · 1h ago
We don't actually know that. It's a nice stereotype, but there's no source to prove these people would've done it anyway.

deadbabe · 2h ago
Can you talk more about the resurrection? Did they fine-tune an LLM on as much of that person's written content as possible?

nullc · 2h ago
That might actually be interesting if there were enough content, something like the "beta-level" AIs in Alastair Reynolds' Revelation Space books.

But that isn't what I've seen done when people said they did that. Instead they just told ChatGPT a bit about the person and asked it to playact. The result was nothing like the person-- just the same pathetic ChatGPT persona, but in their confusion, grief, and vulnerability they thought it was a recreation of the deceased person.

A particularly shocking and public example is the Jim Acosta interview with the simulacrum of a Parkland shooting victim.

dingnuts · 2h ago
the good news is that this shit isn't sustainable. when investor funtime subsidies end, ain't nobody spending $2000/mo for an AI boyfriend.

"but it's getting cheaper and cheaper to run inference" they say. To which I say:

ok bud sure thing, this isn't a technology governed by Moore. We'll see.

fullshark · 2h ago
Mental health issues in the population are never going away, and people using software tools to prey on those with issues is never going away. Arguably the entire consumer software industry preys on addiction and poor impulse control already.

dingnuts · 1h ago
yeah, I know, I'm addicted to this hellhole of a site. I hate it but I still open it every five minutes.

But that's not what I'm talking about. I'm talking specifically about people who've made a SoTA model their buddy, like the people who were sad when 4o disappeared. Users of character.ai. That sort of thing. It's going to get very, very expensive and provide very little value. People are struggling with rent. These services won't be able to survive, I hope, purely through causing psychosis in vulnerable people.

42lux · 1h ago
When I was at a game studio for a big MMORPG, I had the valuable experience of sitting next to the monetization team. It was a third-rate MMO with gacha mechanics, and our whales spent $20-30k every month... for years.
tokai · 2h ago
People have spent much more on pig-butchering scam boyfriends that don't even exist. I bet you could get some people to pay quite a lot to keep what they see as their significant other alive.
valbaca · 2h ago
> ain't nobody spending $2000/mo for an AI boyfriend

you haven't met the Whales (big spenders) in the loneliness-epidemic industry (e.g. OnlyFans and the like)

dingnuts · 1h ago
why would they stop paying for the attention of a real woman when the artificial alternative costs just as much?
evilduck · 32m ago
Reliability and consistency? Humans have bad days. Humans have needs and can't be available 24/7/365 for years on end. OF creators burn out or grow up or lose interest or cash out and retire.

It's not like the "real women" of OnlyFans are consistently real, or women. And there's some percentage that are already AI-by-proxy. There's definitely opportunity for someone to just skip the middleman.

JohnMakin · 1h ago
On OF specifically, you are much more likely to be spending money talking to a bot that sends you sexy pics and messages than a real human being.

AlexandrB · 1h ago
Both are fake. Do you think OnlyFans models are actually giving their customers their attention?
nullc · 2h ago
It doesn't require a particularly powerful AI because the human's own hope is doing the heavy lifting. 70B models run just fine on hardware you can have sitting under your desk.
deadbabe · 2h ago
$2000? I see $4000/month minimum, roughly equivalent to what some typical Wall Street data feeds are priced at. It’s big business.
AstroBen · 56m ago
Every social media platform has engineered its product to be as addictive, time-sucking and manipulative as it can make it. I'm fully expecting AI to be treated the same, except much more potent.

I suppose the only hope here is that its economic value comes mainly from its productive output (summarization, code, writing, text transformation, what have you).

datadrivenangel · 2h ago
And this is only going to get worse: the worst engagement hacking of social media, but even more personalized.

wafflemaker · 2h ago
So far, even the least* "ethical" companies (raising initial money by pretending to be open) don't use machine learning based on users' psychological profiles to create perfect hypno-drug-dopamine-poison (sorry, but the English word for drugs doesn't really carry the deserved weight).

And Snapchat, TikTok, Facebook, YouTube, Instagram, and more... are exactly that. They waste time worth $50-$100 just to make $0.05 in ads.

LLMs seem to be far from that.

* I might be ignorant of the real picture, please correct me if I'm wrong and not aware of really evil LLM companies.

Analemma_ · 2h ago
Wait, what? Companies are absolutely doing this at scale; they're just laundering it through third-party content providers, who get seen or not seen based on what the user has lingered on in the past. If you haven't seen your relatives' Facebook feeds lately, they're close to 100% LLM-generated slop, gradually tuned to the specific user by that same lingering signal. TikTok isn't quite there yet but is trending in the same direction.

So it's platform partners plus filtered selection putting the tuned dopamine poison in front of people for the moment, but it's absolutely happening. And eventually the platform owners can and will cut out the middlemen.

david_shaw · 2h ago
I love playing with advancing technologies, and although I don't think LLM/Agentic AI is quite ready to change the world, one day soon it might be -- but the volume of individuals falling into AI-induced psychosis is astounding and horrifying.

For those of you who, thankfully, don't have personal experience, it generally goes like this: reasonable-ish individual starts using AI and, in turn, their AI either develops or is prompt-instructed to have certain personality traits. LLMs are pretty good at this.

Once the "personality" develops, the model reinforces ideas that the user puts forth. These can range from emotional (such as the subreddit /r/MyBoyfriendIsAI) to scientific conspiracies ("yes, you've made a groundbreaking discovery!").

It's easy to shrug these instances off as unimportant or rare, but I've personally witnessed a handful of people diving off the deep-end, so to speak. Safety is important, and it's something many companies are failing to adequately address.

It'll be interesting to see where this leads over the next year or so, as the technology -- or at least quality of models -- continues to improve.

ACCount37 · 49m ago
We have very little data on the actual "volume of individuals falling into AI-induced psychosis" - and on how that compares to pre-AI psychosis rates. It'll be a while until we do.

It might be an actual order-of-magnitude increase from the background rates. It might be basically the same headcount, but with psychosis expressed in a new notable way. I.e. that guy who would be planning to build a perpetual motion machine to disprove quantum physics is now doing the same, but with an AI chatbot at his side - helping him put his ramblings into a marginally more readable form.

42lux · 2h ago
They have been working on it since GPT-3 and have made zero progress. The newer models are even more prone to hallucinating. The only thing they do is try to hide it.

ACCount37 · 48m ago
What?

First, hallucinations aren't even the problem we're talking about. Second, there have been marked improvements on hallucination rates with just about every frontier release (o3 being the notable outlier).

42lux · 45m ago
Citation needed. The base models have not improved. The tooling might catch more, but if you get an Azure endpoint without restrictions, you are in for a whole lot of bullshit.

sch-sara · 1h ago
I asked Claude if I should worry about this and they said no.

akomtu · 1h ago
Most people are addicted to the pleasures of this world. A subtle kind of this is flattery, and chatbots have mastered that art. Was it done with devious intent? We don't know.
EGreg · 2h ago
In areas I am an expert in, I am sure my discoveries are interesting, and probably novel. I am working now to actually build them myself. ChatGPT is just helping me discuss what else is out there.

But in areas where I am not an expert, I am still skeptical, but enthusiastic that they'll be vindicated, maybe in a decade. Can any physics or QM experts opine on this? https://magarshak.com/blog/?p=568

SirFatty · 2h ago
"Brooks isn't alone. Futurism reported on a woman whose husband, after 12 weeks of believing he'd "broken" mathematics using ChatGPT, almost attempted suicide."

How does one "almost attempt" something?

HankStallone · 2h ago
Say you're several stories up in a building and having a really bad day, so you start thinking about jumping out the window and actually walk over and look down, but then decide against it. If someone asked you later if you'd ever attempted suicide, "almost" seems like a reasonable answer.
Dilettante_ · 2h ago
I guess the same way one turns up missing[1].

[1]https://youtube.com/watch?v=gkBXfmnlVIg

miltonlost · 2h ago
Purchasing large amounts of pills, for instance. You can almost attempt riding a bicycle by buying one but not actually using it.
nullc · 2h ago
Ask the AI for a method, but whatever it hallucinates ends up being something that couldn't actually kill you?

Mostly a joke, but one of the common properties in people with a delusional or otherwise unhealthy pattern of AI use appears to be substituting it for all their own choices, even where it's obviously inappropriate. E.g. if you try to intervene in their conduct, you'll just get an AI-generated reply.