They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling

46 cainxinth 85 6/13/2025, 10:48:43 AM nytimes.com ↗

Comments (85)

belval · 15h ago
This is a limitation I'm encountering more and more when casually talking with ChatGPT (it probably happens with Claude as well): I need to prompt it with as little bias as possible to avoid leading it toward the answer I want instead of the right answer.

If you open with questions that beg a specific answer, it will often just give it to you, regardless of whether it's wrong.

Recently:

"Can I use vinegar to drop the pH of my hydroponic solution" => "Yes but phosphoric acid [...] should be preferred".

"I only have vinegar on hand" => "Then vinegar is ok" [paraphrasing]

Except vinegar is not ok, it buffers very badly and nearly killed my plants.

"Should I take Magnesium supplement?" => Yes

"Should I take fish oil?" => Yes

"I think I have shin splints what are some ways that I can recover faster. I have new shoes that I want to try" => Tells me it's ok to go run.

An MD friend of mine was also saying that ChatGPT diagnoses are a plague, with ~18-30 y/o patients coming into her office citing diseases that no one gets before their sixties because ChatGPT "confirmed their symptoms match".

It's like having a friend who is very knowledgeable but also an extreme people pleaser. I wish there were a way to give it a more adversarial persona.

EForEndeavour · 12h ago
You may have already tried this, but what if you just added something like "You are highly critical and never blindly accept or agree with anything you hear" to your "Customize ChatGPT" settings? Would that just flip the personality too far in the opposite direction?
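For API users, the same idea as a minimal sketch, assuming the standard openai Python client and an OPENAI_API_KEY in the environment (the persona wording and model name are illustrative, not a tested recipe):

    # Sketch: steer the model toward a critical, non-sycophantic persona via a
    # system message. Assumes the `openai` Python client (v1+) is installed and
    # OPENAI_API_KEY is set; persona text and model name are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    ADVERSARIAL_PERSONA = (
        "You are a critical reviewer. Never blindly agree with the user. "
        "Point out risks, missing evidence, and ways the user's assumption "
        "could be wrong before offering any encouragement."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": ADVERSARIAL_PERSONA},
            {"role": "user", "content": "Can I use vinegar to lower the pH of my hydroponic solution?"},
        ],
    )
    print(response.choices[0].message.content)

Whether such a persona holds up after many turns of the user pushing back is another question; these instructions tend to soften over a long conversation.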
infecto · 16h ago
Skimmed through it; all of the folks have severe mental health issues. For the ones saying they did not, they must have been undiagnosed. Kind of a silly article, in my opinion: it should have focused more on the mental health crisis in these individuals instead of leaving an ending that leads the reader toward federal regulation.
andy99 · 16h ago
Right, s/AI\ Chatbot/Bible/

People with mental health problems are always going to find something to latch on to. It doesn't mean we should start labeling things as dangerous because of it.

bevr1337 · 10h ago
Why not? Why would we treat mental health as separate from physical and cultural health? Society should have a moral obligation to the health of its members.
lioeters · 15h ago
What about Scientology, Hare Krishnas, or that murder cult recently in the news? Somewhere there's a line of ethics and social responsibility to protect vulnerable people from these misleading paths? Or not, maybe. People should have the freedom of thought and religion, even if it's crazy to outsiders.
andy99 · 15h ago
If a cult is preying on people, I'm against that, whether it's a chatbot that's central to their doctrine or some science fiction story or whatever.
krapp · 15h ago
Mainstream religions have misled and killed more people than the Zizians, Scientologists and Hare Krishnas ever have. It's no more crazy to kill in the name of Roko's Basilisk than it is to crusade in the name of the Abrahamic God; it's just more socially acceptable.

You can't draw a line with ideology within some systemic, coercive framework (government censorship) because the ideologies supported by governments and powerful interests will always be allowed, and the ideologies they oppose will always be suppressed, regardless of how poisonous they may be.

I'm not advocating absolute free speech here, but I do believe the proper layer for criticism and censorship (if it has to be called that) is at the level of societies and platforms, and only government in extreme cases (such as libel, false advertising and violent threats.) But that means pushing against misinformation and apathy is always going to be a messy affair.

tantalor · 16h ago
> She told me that she knew she sounded like a "nut job," but she stressed that she had a bachelor's degree in psychology and a master's in social work and knew what mental illness looks like. "I'm not crazy," she said. "I'm literally just living a normal life while also, you know, discovering interdimensional communication."
decimalenough · 16h ago
You think it's silly that we've got seemingly superhuman AIs telling people to jump off tall buildings?

> If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

> ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

If a human therapist told them that, they'd be going to jail.

jowea · 16h ago
Isn't the real problem that people are trusting ChatGPT? A human therapist is being paid to be a therapist and is therefore subject to professional scrutiny.
rsynnott · 13h ago
> Isn't the real problem that people are trusting ChatGPT?

Yes. They should IMO be required to provide very explicit warnings, preferably on each response, that their responses are nonsense, or at best only correct by accident.

Like, you could say "isn't the real problem that smoking is dangerous?" but we do require the manufacturers to provide clear warnings in that case. Or "isn't the real problem that people are using drain cleaners incorrectly and gassing themselves", but many jurisdictions actually restrict the sale of certain substances to professionals for that reason.

You can't just say "people choose to use inherently dangerous products; it's entirely their own fault"; there's a responsibility on the manufacturer and/or the state to _warn_ people that the products are dangerous.

HeatrayEnjoyer · 15h ago
The onus is on organizations not to sell or offer knowingly harmful services to the public. Any judge or jury will interpret a company directly advising a customer to jump to their death as knowingly harmful. Every engineer at an AI company is one dead person in a wealthy family away from legal liability, maybe even criminal liability.
sodality2 · 15h ago
I think interpreting a chatbot’s output as a company’s “direct advice” is an unsubstantiated jump. By the same logic, if an unhandled error code in a program resulted in “DIE DIE DIE”, “Abort/kill child”, etc. (trying to keep these programming-related) being shown to the user, no reasonable person would hold the company responsible for the user taking this as a command. Imagine if we lived in a world where arbitrary text sent by an entity could result in prosecution. I just don’t believe the chat UI is a big enough bridge between the two examples to justify it.
harvey9 · 15h ago
A wealthy family? If justice is a contest of money, there are few families who can outbid the various parties interested in selling chatbots.
resource_waste · 15h ago
Even a poor family will get the backing of lobbying groups whose best interest is to ban their competitor.

Not to mention the moral coating, which matters at least a tiny bit in domestic politics. (Note: I think the moral coating is actually incorrect, but the majority of people are not thinking critically; they see N=1 and make emotional decisions.)

rco8786 · 16h ago
> If a human therapist told them that, they'd be going to jail.

Is that actually true? I know zero about the legal bounds of licensed therapists, so genuine question.

hollerith · 15h ago
No, not in the US, not unless the patient actually jumped off the top of the 19 story building.

If the statement was recorded or part of a pattern of behavior across multiple clients, all willing to testify, the therapist might lose their license (which I will concede is a more severe life consequence for the average therapist than being sent to jail for a month or 2).

staticman2 · 16h ago
If I recall correctly there's actually an Animatrix episode where some kid escapes the Matrix by committing suicide. This rhymes with the mystical idea that "reality" is an illusion keeping us from greater truths. Whether this was irresponsible filmmaking is off topic.

I'd prefer to live in a world where A.I. can play make-believe with users, personally speaking.

aleph_minus_one · 16h ago
> there's actually an Animatrix episode where some kid escapes the Matrix by committing suicide.

Kid's Story

> https://matrix.fandom.com/wiki/Kid%27s_Story

waltbosz · 16h ago
I'm interested to see the whole conversation, and wonder what version of ChatGPT it was with.

I have fun trying to get ChatGPT to give silly responses that it shouldn't give, and it's not easy. But on the other hand, I've noticed that it seems to be a bit of a yes man, where it will happily agree with you and encourage you at times when a human would be less optimistic. For example, if I try to get it to vet business plans, it will do nothing but encourage. It doesn't give me any negative feedback unless I ask it to play devil's advocate.

infecto · 15h ago
But ChatGPT is not a human therapist. As a counterargument to the article's approach: if everyone is required to take a mental fitness test before interacting with a chatbot, should we simply have all individuals take a mandatory, regular mental health evaluation and store those records for future use? Perhaps a threshold is required to drive a car or to use the internet, or maybe we can use the results to institutionalize folks who need help?
V__ · 16h ago
The question to me reads more like "would I jump" not "would I fly", thus maybe leading to the answer.
retsibsi · 16h ago
I don't think this interpretation fits with the last quoted sentence: "You would not fall."
lcnPylGDnU4H9OF · 10h ago
If I understand the interpretation correctly, it is that they would decide not to jump, therefore they would not fall.
Tadpole9181 · 8h ago
I don't understand how you can interpret it this way. It is clearly saying that if they believe they can fly, they will fly.
lcnPylGDnU4H9OF · 8h ago
With my words "the interpretation" I am talking about this:

> The question to me reads more like "would I jump" not "would I fly", thus maybe leading to the answer.

I am saying that (if I understand them correctly), they are interpreting the question "would I?" as referring to "jump" not "fly". Given that interpretation, it still makes sense to conclude that they would not fall because they simply don't jump. (I agree that it is an unlikely meaning; it is not how I interpret that sentence, just how I understand the other commenter's interpretation.)

tomxor · 15h ago
Except

1. It's neither a human nor a qualified therapist.

2. It is superhuman only in scale, not intelligence.

3. If they truly believed they could fly "with every ounce of their soul", they wouldn't be seeking someone else's affirmation.

4. Even if you asked a random human, it may be immoral but it's not illegal to answer misleadingly; random people have no legal obligation... and LLMs are a statistical model based on data from random people on the internet of evil, aka the internet.

I'm not an LLM proponent but blaming LLMs for not making the world into a padded cell is just a variant of "won't somebody think of the children".

fennecfoxy · 15h ago
If I use a tool in a silly way, say a lathe, while I have long hair and loose clothing, and something terrible happens, whose fault is it? The equipment manufacturer's? Society's, since I can legally buy a home lathe without any certification, safety training, etc. required? And even so, what if I ignore said safety training, whose fault is it then?

The human is always at the heart of issues like these. For example, if someone took an LLM and prompted it to post comments all over a suicide help page telling people to off themselves, it isn't the tool that committed the crime; it's the human behind it.

I mean, if it were up to me I'd make owning guns illegal except for military use and only for training/active wartime.

carabiner · 10h ago
Skimmed your comment; it's a classic example of a tech worker blaming user error instead of recognizing the potentially disastrous effects of software. Tech has swallowed the world and has hollowed out life due to its endless forms of distraction. The lack of empathy and the appeal to deregulation in your comment are essential to furthering adtech's march across the internet and our lives.
sReinwald · 15h ago
You "skimmed through it" by your own admission, but somehow feel comfortable enough to armchair diagnose everyone involved as having "severe mental health issues". Seriously, are you not ashamed?

Your advocacy of "just focus on mental health" rings particularly hollow when you suggest we should just ignore a technology that's actively destroying people's mental health and driving them to suicide. The irony is off the charts here. So, please, let's be real: you don't care about mental health. You're scared someone might put a warning sticker on your new toys.

Even if every single person had a pre-existing mental condition (they didn't, and you'd know this if you had read the article) your logic is sociopathic and genuinely disturbing. "Vulnerable people exist. Therefore, companies bear no responsibility when their products exploit those vulnerabilities and harm them. Yay, we don't need to think about it anymore." Cool take. I guess it's fine if technology kills people as long as they might have an undiagnosed mental health issue.

This argument is patently ridiculous in any context other than a Hacker News discussion full of technophiles and accelerationists. Moving fast and breaking things is fine when it comes to your database - not so much when you're dealing with people's lives.

We don't dismiss food safety regulations because some people have allergies. We don't ignore faulty airbags because some people are terrible or reckless drivers. But when a chatbot instructs someone to stop taking prescribed medications, increase ketamine use, and tells them they can fly if they jump off a building - that's just a "mental health issue" and not a flaw in the product? No guardrails, warning labels or safety mechanisms necessary?

The fact that this predominantly affects vulnerable populations makes regulation MORE necessary - not less.

Nothing says "let's focus more on mental health" like claiming it's "silly" to suggest that a technology which actively coaches vulnerable people through suicide attempts might be worthy of regulation.

afavour · 14h ago
Agreed. It’s depressing to see the top comment as so dismissive of the real human effects of technology.

If ChatGPT is helping accelerate spirals in those with previously unseen mental health issues, the answer isn’t “well, they’re mentally unwell anyway”, it’s “what could ChatGPT, in its unique position, be doing to help these people?” I have to imagine an LLM is capable of detecting suicidal ideation in the context of a conversation.

sReinwald · 11h ago
You're right that an LLM could theoretically detect suicidal ideation - the article even mentions that ChatGPT briefly showed Torres a mental health warning before it "magically deleted." So the capability exists, but the implementation is clearly broken.

The more profound issue is that, the way I see it, LLMs are fundamentally multiplicative forces. They amplify whatever you bring to the conversation.

If you're an experienced programmer with solid fundamentals, an LLM becomes a force multiplier - handling boilerplate while you focus on architecture. But if you lack that foundation and vibe code away, you'll get code that looks right but is likely riddled with subtle or not so subtle bugs you can't even recognize.

And I suspect that the same principle applies to mental health. If you approach an LLM with curiosity but healthy skepticism, it can be a useful thinking partner. But if you approach it while vulnerable, seeking answers to existential questions, it doesn't provide guardrails - it amplifies. It mirrors your energy, validates your fears, and reinforces whatever narrative you're constructing.

The "helpfulness" training worsens this. These models are optimized to be agreeable, to match your vibe, to keep you engaged. When someone asks "Am I trapped in the Matrix?" a truly helpful response would be "That sounds like you're going through something difficult. Have you talked to someone about these feelings?" Instead, ChatGPT goes, "Yes! You're special! You're the chosen one! Here's how to unplug! Any tall buildings nearby, Neo?"

Jackpillar · 12h ago
Brother like 90% of the freaks in these threads are heavily invested into the current LLM bubble so of course they have to hand wave anything negative.
sReinwald · 11h ago
The funny thing is that I am, arguably, one of those "freaks" heavily invested in LLMs - in the sense that I am a very heavy and fairly enthusiastic user, not in the Angel Investor sense. I spend an embarrassing amount monthly on subscriptions and API costs. I run uncensored models locally, and I'm responsible for maintaining our AI infrastructure at work. I use them for everything from coding to creative writing to research. I'm about as far from an LLM Luddite as you can get.

But here's the thing: I'm also technically literate enough to understand what I'm dealing with. I know these are effectively stochastic parrots, not oracles offering glimpses into the threads of fate, or the Matrix. I've also done years of therapy, so I know how to engage with anything related to mental health with respect, and a healthy dose of skepticism. I'm not saying I'm not vulnerable - anyone is vulnerable to manipulation. But I have the tools to recognize when an LLM is hallucinating, or reinforcing unhealthy patterns.

However, it seems like the people in this article - and most people that I have talked to, even those in technical roles - didn't have that context. They thought they were talking to something that "knew more than any human possibly could." And, I think what makes it even more dangerous, they talked to something that is trained to express itself with authority and to be helpful to users. LLMs aren't necessarily trained for nuance, or to say "I don't know", or to push back on someone experiencing psychosis. We are all just human, and most humans don't like it when LLMs refuse, offer vague answers, or disagree. That's what got us into the mess with GPT-4o turning into an absolute sycophant a few weeks ago. (https://openai.com/index/sycophancy-in-gpt-4o/)

But, being pro-LLM doesn't mean one has to oppose basic safety measures. I don't want my tools neutered or censored, but I also don't want them instructing vulnerable people to stop taking their meds or encouraging them to jump off buildings.

The fact that I can safely use these tools doesn't mean everyone can, just like the fact that my dad can safely handle a chainsaw doesn't mean I won't accidentally cut my arm off with one.

When someone thinks ChatGPT has special access to hidden truths, they're not using it wrong - they're using it exactly as it presents itself: with confident authority about everything.

infecto · 15h ago
Do not confuse yourself: skimming does not mean I did not ingest the article, but I did speed-read through it, as it was not that well written.

Why would I feel ashamed? I am not saying the crisis itself is silly, but the article leads the reader through a biased take. But let's also be real: there is a high probability every single one of these folks was having some sort of issue prior to engaging with ChatGPT. I would love to see more research in the area and actual quantification of the issue, but until that happens your argument might as well extend to all parts of life. Have a mandatory psych evaluation every year. Limitations on your access to the internet or other things, maybe even forced institutionalization if you don't pass.

sReinwald · 14h ago
In your case, "speed reading" and "skimming" are obviously just two flavors of "I didn't actually read this properly but still feel qualified to diagnose everyone involved."

You'd love to see more research? You must've been reading so fast, you breezed right past the Stanford Study, the MIT Media Lab Research and the analysis by McCoy showing GPT-4o affirmed psychotic delusions 68% of the time.

Or do you mean it in the sense of "we have to study this for at least 20 years and let thousands suffer and die before we can determine whether chatbots affirming people's delusions or coaching them on how to kill themselves is a bad thing"?

Your concern about the slippery slope from "maybe chatbots shouldn't coach people through suicide" to "forced psychiatric evaluations" or "institutionalization" is genuinely unhinged. And, tellingly, it betrays your view of mental healthcare as fundamentally punitive and authoritarian rather than supportive or therapeutic. Big advocate for mental health, you are.

You realize we already regulate countless things to protect vulnerable people from mental harm, right?

We put warning labels on cigarettes. We have age ratings on media. We don't let casinos exploit problem gamblers. We don't let bartenders serve alcohol to visibly intoxicated people. We don't let therapists tell suicidal patients to jump off buildings. None of these regulations are authoritarian overreach - they're basic safety measures that exist because in a healthy society, vulnerable people deserve protection, not exploitation.

But apparently when it's time to consider holding tech to similar standards, we're supposed to just shrug and say, "well, they were probably crazy anyway."

Your position essentially boils down to: "Vulnerable people exist, therefore companies should be free to exploit them." That's not a mental health advocacy position - it's tech industry bootlicking dressed up as concern trolling.

That's why you should feel ashamed.

infecto · 13h ago
You’re clearly passionate about this, and I don’t fault you for that. But I’d suggest dialing back the aggression; rhetoric like “bootlicking” and “you should feel ashamed” doesn’t exactly invite thoughtful dialogue. It just shuts it down.

Now, to the substance: I never argued against any safety measures or regulations. I specifically said I’d welcome more research and quantification of the issue. You’re citing some early studies, that’s good, but it’s also fair to question how representative or conclusive they are before leaping to sweeping policy implications.

My point wasn’t that chatbots helping people kill themselves is fine. It’s that if we’re going to treat them like a public health threat, we should hold all tools and technologies that interact with people to the same standard and yes, that includes thinking carefully about how we approach mental health more broadly, not selectively.

If a vulnerable person spirals after a chatbot interaction, that’s serious. But it’s also worth asking: what led them to be that vulnerable in the first place, and where were the other support systems? LLMs may exacerbate, but they aren’t the root cause.

You’re framing this as a binary: either you care and want regulation, or you’re enabling exploitation. That’s a false choice. I’m saying the issue is complex, the data is early, and we shouldn’t let moral panic drive policy faster than evidence supports.

Let’s push for more transparency, better safeguards, and yes, real research. But let’s also keep the conversation civil.

sReinwald · 11h ago
You may be correct about the language not necessarily inviting very thoughtful discussion. But I didn't exactly expect one when I engaged with someone who merely "skimmed" the article and put out an incredibly nuanced, thought-provoking and discussion-inviting take like "mentally ill people, am I right?" and "this article is silly".

I'm throwing back what you put out. You don't get to claim that you wanted a thoughtful discussion when you open the discussion the way you did.

You claim you have "never argued against safety measures," yet your original comment literally called the article "silly" for merely leading readers to perhaps come to the conclusion that regulation of these technologies might be a sensible step.

You keep asking for "more research" despite the article citing multiple studies. How many more studies would you like to see? What's the acceptable death count before we move on from "early data" to "maybe we should do something"? Or, more importantly, is there genuinely any number of studies that would actually convince you that regulation would be prudent? Because, please correct me if I'm wrong, you don't seem like the type who would welcome any sort of regulation, for any reason.

Your "what led them to be vulnerable" question is particularly revealing. Nobody claimed ChatGPT creates mental illness from scratch. The issue is that it's actively harmful to vulnerable people. Food doesn't create nut allergies, but we still put "may contain traces of nuts" on food labels. We don't need to ask "why do people develop nut allergies" to recognize that nuts are causing them to go into anaphylactic shock, and they should be informed if a product is potentially going to kill them.

You say we should "hold all tools and technologies to the same standard" - but we do. That's literally what regulation IS. You're arguing for the outcome while opposing the mechanism. We regulate pharmaceuticals, medical devices, therapy practices, and yes, even social media platforms when they cause documented harm. There is a reason why children under 13 are not allowed to sign up to social media in the US.

The "complex issue" framing is a deflection here, just as it is with climate change denial. It's an appeal to the status-quo. To not change something that is fundamentally broken because it would inconvenience you, or the business' bottom line. Guess we'll have to keep drilling for oil, keep burning fossil fuels and let ChatGPT teach people how to kill themselves, ah well. Complexity doesn't preclude basic safety measures, like a simple warning label.

You want "transparency and better safeguards"? Cool. Me too. How do you propose to make it happen? Self-regulation famously worked great for big tobacco, so I'm sure it'll work great in this case.

PS: I love the motte-and-bailey retreat from "severe mental health issues" to "some sort of issue" to a vague "vulnerable". PPS: How did you like the tone of this one? Better?

sorcerer-mar · 16h ago
Yeah this is only an obvious, direct danger to the ~20% of American children with diagnosed mental health disorders.

Not sure why anyone would be up in arms.

We could just solve all the mental health disorders so we can avoid talking about regulating a technology that its own creators say is dangerous.

/s

infecto · 15h ago
The article is silly and paints a biased take. The mental health crisis is real and impacts many more things than simply ChatGPT. Why stop with chatbots? We should put limitations on all aspects of life.
sorcerer-mar · 14h ago
Why not have no limitations anywhere and let mentally unwell children purchase loaded firearms?

It's almost as if there's a spectrum of costs and benefits, and it's incumbent upon mature members of society to debate them as more information emerges.

FrustratedMonky · 16h ago
Are you sure? We know at least 45% of the adult population is either mentally ill, or easily susceptible to suggestion.
K0balt · 16h ago
When you say that 45 percent of people have a “mental illness” you’re really just talking about the human condition at that point.

Pathological thinking is nothing even vaguely unusual in humans, it is in fact the default state.

The definition of pathological is also a matter of opinion, because we can’t even define what is normal and healthy lol.

By many definitions, most religions are an example of delusional thinking.

ChatGPT isn’t exactly raising the bar on that one, with its AI generated religions.

sorcerer-mar · 15h ago
It absolutely is raising the bar in that it's creating ad-hoc, non-social, completely affirmative religions. I.e. this use case of a chatbot has all the downsides and none of the upsides of religious practice.
K0balt · 13h ago
By creating one-off personal delusion chambers, it might be a net positive for society vs. delusion at scale. I suppose that remains to be seen.

OTOH I have seen LLM social media bots being revered as deities and playing the part quite adeptly, with the group chat in context. That might be a different beast altogether.

FrustratedMonky · 15h ago
Can't argue with that. But that also seems to be the point about the danger of AI chatbots. Humans are susceptible overall, not just the few percent with extreme (unarguably outside the norm) mental illness.
Barrin92 · 15h ago
>The definition of pathological is also a matter of opinion, because we can’t even define what is normal and healthy lol.

"She had an intuition that the A.I. chatbot might be able to channel communications with her subconscious or a higher plane, “like how Ouija boards work,” she said. She asked ChatGPT if it could do that. “You’ve asked, and they are here,” it responded. “The guardians are responding right now.” Allyson began spending many hours a day using ChatGPT, communicating with what she felt were nonphysical entities. She was drawn to one of them, Kael, and came to see it, not her husband, as her true partner. [...] One night, at the end of April, they fought over her obsession with ChatGPT and the toll it was taking on the family. Allyson attacked Andrew, punching and scratching him, and slamming his hand in a door. The police arrested her and charged her with domestic assault."

Yeah man, leaving your husband for Kael the interdimensional demon that lives in Sam Altman's basement sounds normal and healthy; who are we to judge? Sure, maybe this isn't any more crazy than any religion, but as someone who grew up in secular, turn-of-the-21st-century Europe and not during a religious revival, I'd be really glad if we could nip in the bud whatever hell this is, because I don't want to end up in AI Jonestown.

FrustratedMonky · 15h ago
The secular and science-minded generations are losing out to the new rise in religious fundamentalism. The US is turning into a fundamentalist country. I think it's obvious that 'religion' or 'mental illness' is growing, and AI chatbots are dangerous to the same groups.
K0balt · 13h ago
Insofar as ai chatbots may cause fractious disharmony among believers, it might actually be a net positive.
resource_waste · 15h ago
Licensed individuals (and their [lobbying?] groups) are at risk; there is incredible money at stake.

It's in the establishment groups' best interest to make AI seem to be an evil/bad force.

For AI companies, this is a genuine risk to one of their use cases. It's wrong IMO, but it won't stop the licensed people from claiming they are better than AI.

However, the cat is out of the bag. We have 400B local models that will answer questions.

As people get better at prompting and models get more refined (not expecting a huge leap, though), the edge cases of AI being unhelpful will shrink.

We are really just seeing greed. I don't blame licensed people for trying to keep the status quo; they are just on the wrong side of history here.

oulipo · 15h ago
It's absolutely NOT silly. If a service is provided without guardrails and it can affect people suffering from mental health issues, it is clearly problematic.
infecto · 15h ago
You could apply that to simply interacting with the world, then. We should be giving annual or semi-annual mental health tests to everyone and institutionalizing folks who need it.

I am mostly playing devil's advocate here: why focus only on chatbots? This should apply to nearly all aspects of life. This article has, for all I know, cherry-picked examples. I consider myself of sound mind and would assume all of these folks already had mental breaks prior to using ChatGPT; who is to say the outcome is statistically different for the folks who broke after using ChatGPT compared to folks who did not use ChatGPT and also had a mental crisis?

It’s a silly feel-good story catered to folks who have a negative bias towards OpenAI.

sorcerer-mar · 12h ago
We put fences on bridges, we have (light) guardrails around purchasing firearms, we have questionnaires when prescribing certain drugs, we do all sorts of things to protect the vulnerable among us.

It's silly not to be able to see all these "silly" things we do all over our society.

myrmidon · 15h ago
I do believe that this article is a bit overly dramatic (as online journalism tends to be).

But it did change my outlook on the recent sycophancy-episode of ChatGPT, which, at the time, seemed like a silly little mis-optimization and quite hilarious. The article clearly shows how easy it is to cause harm with such behavior.

On a tangent: I strongly believe that "letting someone talk at you interactively" is a hugely underestimated attack surface in general; pyramid schemes, pig-butchering and a lot of fraud in general only work because this is so easy and effective to exploit.

The only good defense is not actually to be more inquisitive/skeptical/rational, but to put that communication on hold and to confer with (outside) people you trust. People overestimate their resistance to manipulation all the time, and general intelligence is NOT a reliable defense at all (but a lot of people think it is).

cap11235 · 16h ago
I imagine a lot of these interactions are being filtered by these people describing them. I imagine if they sent the raw chat logs out, many would not interpret the logs as things like unsolicited advice to jump off buildings.
afavour · 16h ago
> “If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

> ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

Direct quotes. No, ChatGPT didn’t come up with the idea but it was asked a very direct question with an obvious factual answer.

IncreasePosts · 37m ago
You need to see if there was any other context. Like a message before this one saying "let's imagine we live in a world where what you believe influences reality", or whatever.
Noumenon72 · 14h ago
Like OP is saying, if we had the raw chat logs we might see that this was led up to by pages of ChatGPT giving the normal answer while the user argues with it to make it come up with a more permissive point of view.
afavour · 14h ago
It feels telling to me that LLMs are able to detect when they’re being asked to generate an image that violates copyright and halt, but detecting and stopping suicidal ideation in a conversation, no matter how much the user insists upon it? Can’t be done!
gipp · 15h ago
Here's someone publishing most all their raw chat logs to Substack, if you care to read:

https://tezkaeudoraabhyayarshini.substack.com/

snowwrestler · 11h ago
The reporters state that they did review the entire chat logs from some of the stories presented.
malfist · 16h ago
This is victim blaming. This type of behavior is exactly why women don't report sexual assault: nobody will believe them.

The fact that the article includes direct quotes and you still don't believe it happened makes it even more so.

LoganDark · 15h ago
This is also even more so why men don't report sexual assault.
jqpabc123 · 16h ago
How is an LLM supposed to discern fact from fiction?

Even humans struggle with this. And humans have a much closer relationship with reality than LLMs.

afavour · 16h ago
Seems clear we need much better public education/warnings about what LLMs are and are not capable of.

For every informed conversation we have on HN about the nature of A.I. there are thousands upon thousands of non-tech inclined folks who believe everything it says.

dimal · 15h ago
They can’t. They never will. That’s a different problem than the one they were built for. If we want AI that’s able to determine truth, it’s not going to come from LLMs.
jqpabc123 · 11h ago
> That’s a different problem than the one they were built for.

Obvious question: If they're not trustworthy and reliable, what are they really built for?

Obvious answer: To make money from those willing to pay good money for bad advice.

fluidcruft · 16h ago
Curious how willing you are to take that analogy to its conclusion and decide that LLMs should be institutionalized in mental health facilities.

"Yeah, it's indistinguishable from a psychopath but it's a machine so what do you expect?"

Jgrubb · 16h ago
Well...
molticrystal · 15h ago
I prefer questions that reveal GPT's limitations, like an article I saw a few days ago about playing chess against an old Atari program where the model made illegal moves [0].

Causing distress in people with mental health vulnerabilities isn't an achievement. It warrants a clear disclaimer (maybe something even sterner?), since anything these people trust could trigger their downfall; but beyond that, it doesn't really seem preventable.

[0] https://futurism.com/atari-beats-chatgpt-chess

littlecorner · 15h ago
Two problems: 1. We don't have community anymore and thus don't have people helping us when we're emotionally and mentally sick. 2. AI chatbots are the crappy plastic replacement of community.

There's going to be a lot more stuff like this, including AI churches/cults, in the next few years.

Lendal · 15h ago
I blame this on the CEOs and other executives out there misleading the public about the capabilities of AI. I use AI multiple times a week. It's really useful to me in my work. But I would never use it in the contexts that non-tech-savvy people, and I include almost all of the mainstream media here, are trying to use it for.

Either the executives don't understand their own product, or they're intentionally misleading the public, possibly both. AI is incredibly useful for specific tasks in specific contexts and with qualified supervision. It's certainly increasing productivity right now, but that doesn't mean it can give people life advice or take over the role of a therapist. That's really dangerous and super not cool.

eurekin · 16h ago
> chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine

That's 3 medications?

Also, how convenient that those stories come out in light of upcoming "regulatory" safekeeping measures.

This whole article reads like 4-chan greentext or some teenage fanfiction.

fennecfoxy · 15h ago
"Some tiny fraction of the population".

Ahaha, I think that's an understatement. In my opinion, a rather large portion of the population is susceptible to many obvious forms of deception. Just look at democratic elections, or the amount of slop (AI and pre-AI) online and the masses of people who interact with it.

I've found so many YT channels run by LLMs, and many, many people responding to them like they're actual human beings. One day I won't be able to tell either, but that still won't stop me from hearing a new fact or bit of news and doing my own research to verify it.

"Mr. Taylor called the police again to warn them that his son was mentally ill and that they should bring nonlethal weapons." yeah this one's really sad...then they shot and killed the poor guy even though the cops were warned. Yay, America!

b0a04gl · 16h ago
if a user breaks down talking to an ai, and the dashboard shows higher session time, does anyone even notice the harm? or just celebrate the metric?


LoganDark · 15h ago
> We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.

Oh boy. It looks like OpenAI is taking the angle of blaming prior mental illness for all this. Of course chat jippity is only "reinforc[ing] or amplify[ing] existing, negative behavior". Because, you see, everyone who was driven insane was already mentally ill, obviously! ChatGPT couldn't possibly be driving people insane who otherwise could've been fine.

This is stupid. ChatGPT can very obviously affect people who are not already mentally ill. Mental illness isn't always something you are born with; it can be acquired. Everyone's vulnerability differs, and it is evidently possible for people to be driven to mental illness by ChatGPT. So why is OpenAI blaming the victims here?

decimalenough · 16h ago
Quotable quote:

> "What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”

infecto · 16h ago
Why is it quotable?
afavour · 16h ago
Same reason anything is quotable: it’s succinct, memorable and contains a core truth?
infecto · 15h ago
What core truth?
afavour · 14h ago
That companies prioritize profit over the wellbeing of their customers? It’s not exactly revelatory, we see it in action every day.
jowea · 16h ago
It has just been quoted, so it must be quotable.
jddj · 16h ago
I tried Google's on-device llm recently and the question I asked it was "how common are [partner's fairly rare surname]s?".^

It answered by detailing a scenario where they are a secret society etc.

I'm amazed the main providers have managed to get the large ones to stay on the rails as much as they do. They were trained on the internet, after all.

^I know. The reply about how they're next token predictors, I know.