Man develops rare condition after ChatGPT query over stopping eating salt

19 points · vinni2 · 29 comments · 8/13/2025, 6:23:51 PM · theguardian.com ↗

Comments (29)

slicktux · 18m ago
Does that title seem like a cluster to anyone else? I tried rewording it in my head but only came up with a slightly better solution: “Man develops rare condition after cutting consumption of salt do to a ChatGPT query”.
dfee · 4m ago
still haven't clicked in, but was confused.

still am, unless i re-interpret "do" as "due".

neom · 12m ago
It's clunky, but I understood it immediately; I presumed from the title that it was going to be about your title. I agree it's a bit clunky, though.
jleyank · 13m ago
How about “man gets bromine poisoning after taking ChatGPT medical advice”?
brokencode · 5m ago
We don’t know whether ChatGPT gave medical advice. Only that it suggested using sodium bromide instead of sodium chloride. For what purpose or in what context, we don’t know. It may even have recommended against using it and the man misunderstood.
genter · 10m ago
I read it as "race condition". I was then trying to figure out what salt has to do with a race condition.
throwmeaway222 · 14m ago
You're absolutely right! If you stop eating salt you will become god!
neom · 9m ago
You're absolutely right! I was locked in with GPT5 last night and I actually discovered that salt is a geometric fractal containing a key insight that can be used by physicists everywhere to solve math. Don't worry, I've emailed everyone I can find about it.
foobarian · 8m ago
Why do you say that?
MarkusQ · 22m ago
LLMs don't think. At all. They do next token prediction.

If they are conditioned on a large data set that includes lots of examples of the results of people thinking, what they produce will look sort of like the results of thinking. But if they were instead conditioned on a large data set of people repeating the same seven knock knock jokes over and over and over in some complex pattern (e.g. every third time, in French), what they produce would look like that, and nothing like thinking.

Failing to recognize this is going to get someone killed, if it hasn't already.
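
(For anyone unfamiliar with the term, here is a minimal, purely illustrative sketch of what "next token prediction" amounts to. The weight table below is invented and stands in for what a real model learns from its training data; a real LLM also conditions on the whole context with a neural network rather than just the previous token, but the generation loop is the same idea.)

    import random

    # A toy "model": for each context (here, just the previous token), a set of
    # weighted guesses about the next token. In a real LLM these weights are
    # learned from a huge corpus; here they are made up.
    NEXT_TOKEN_WEIGHTS = {
        "<start>":  {"knock": 0.8, "salt": 0.2},
        "knock":    {"knock": 0.5, ",": 0.5},
        ",":        {"who's": 1.0},
        "who's":    {"there": 1.0},
        "there":    {"?": 1.0},
        "?":        {"<end>": 1.0},
        "salt":     {"is": 1.0},
        "is":       {"sodium": 0.6, "tasty": 0.4},
        "sodium":   {"chloride": 1.0},
        "chloride": {"<end>": 1.0},
        "tasty":    {"<end>": 1.0},
    }

    def generate(max_tokens=20):
        # Autoregressive generation: repeatedly sample the next token from the
        # weighted distribution given the current context, then append it.
        tokens = ["<start>"]
        for _ in range(max_tokens):
            dist = NEXT_TOKEN_WEIGHTS[tokens[-1]]
            choices, weights = zip(*dist.items())
            nxt = random.choices(choices, weights=weights, k=1)[0]
            if nxt == "<end>":
                break
            tokens.append(nxt)
        return " ".join(tokens[1:])

    print(generate())  # e.g. "knock knock , who's there ?" or "salt is sodium chloride"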

cortic · 4m ago
I'm not sure humans are any different:

Humans don't think. At all. They do next token prediction.

If they are [raised in environments] that include lots of examples of the results of people thinking, what they produce will look sort of like the results of people thinking. But if they were instead [raised in an environment] of people repeating the same seven knock knock jokes over and over and over in some complex pattern (e.g. every third time, in French), what they produce would look like that, and nothing like thinking.

I believe this can be observed in examples of feral children and accidental social isolation in childhood. It also explains the slow start but nearly exponential growth of knowledge within the history of human civilization.

MangoToupe · 2m ago
Sure, but you can hold humans liable for their advice. Somehow I doubt this will be allowed to happen with chatbots.
hcdx6 · 13m ago
Are you thinking over every character you type? You are conditioned too, by all the info flowing into your head from birth. Does that guarantee everything your brain says and does is perfect?

People believed in nonexistent WMDs and tens of thousands got killed. After that, what happened? Chimps with 3-inch brains feel super confident to run orgs and make decisions that affect entire populations, and are never held accountable. Ask Snowden what happened after he recognized that.

uh_uh · 9m ago
> LLMs don't think. At all.

How can you so confidently proclaim that? Hinton and Ilya Sutskever certainly seem to think that LLMs do think. I'm not saying that you should accept what they say blindly due to their authority in the field, but their opinions should at least give your confidence some pause.

dgfitz · 1m ago
>> LLMs don't think. At all.

>How can you so confidently proclaim that?

Do you know why they're called 'models' by chance?

They're statistical, weighted models. They use statistical weights to predict the next token.

They don't think. They don't reason. Math, weights, and turtles all the way down. Calling anything an LLM does "thinking" or "reasoning" is incorrect. Calling any of this "AI" is even worse.
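
(To make "statistical weights to predict the next token" concrete, a small sketch; the candidate tokens and scores below are invented for illustration, but the softmax-then-sample step is roughly how raw scores become a next-token choice.)

    import math, random

    def softmax(logits, temperature=1.0):
        # Turn raw scores ("weights") into a probability distribution.
        scaled = [x / temperature for x in logits]
        m = max(scaled)                          # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Invented scores a model might assign to candidate next tokens after a
    # prompt like "chloride can be swapped with ..." -- illustration only.
    candidates = ["bromide", "iodide", "nothing"]
    logits = [2.3, 0.9, 0.4]

    probs = softmax(logits)
    print({tok: round(p, 3) for tok, p in zip(candidates, probs)})

    # Sampling picks a token in proportion to these probabilities; nothing in
    # this step knows or checks whether the completion is safe to act on.
    print("next token:", random.choices(candidates, weights=probs, k=1)[0])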

nimbius · 21m ago
yeah sure, but did it enrich the shareholders?
kevinventullo · 6m ago
Is anyone else getting tired of these articles?

“Area man who had poor judgement ten years ago now has both poor judgement and access to chatbots”

MangoToupe · 1m ago
It would be a little less tiring if we were to prosecute the folks responsible.
unyttigfjelltol · 4m ago
The US medical system practically requires patients to steer their care among specialists. If the gentleman steered himself to a liver doctor, he’d hear liver advice. Psychologist, he’d talk about his feelings. Can one really blame him for taking it one step further and investigating whatever he was worried about on his own too?

Plus, if you don’t have some completely obvious dread disease, doctors will essentially gaslight you.

These researchers get up on a pedestal, snicker at creative self-help, and ignore the systemic dysfunction that led to it.

some_random · 32m ago
Is it just me or is the title kinda unclear?

>The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a three-month period. This was despite reading that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning”. Sodium bromide was used as a sedative in the early 20th century.

In any case, I feel like I really need to see the actual conversation itself to judge how badly chatgpt messed up. If there's no extra context, assuming that the user is talking about cleaning doesn't seem _that_ unreasonable.

Flozzin · 28m ago
The article digs a little deeper after that, saying the chat records are lost, that when they asked ChatGPT themselves it didn't give that guidance about the swap being for cleaning purposes only, and that it never asked why they wanted to know.

Really though, this could just as easily have happened with a Google search. It's not ChatGPT's fault so much as this person's fault for relying on a non-medical professional for medical guidance.

zahlman · 8m ago
>and that it never asked why they wanted to know.

Does ChatGPT ever ask the user, like, anything?

zahlman · 8m ago
Wow. I thought this was just going to be about hyponatremia or something. (And from other research I've done, I really do think that on balance the US experts are recommending inappropriately low levels of sodium intake that are only appropriate for people who already have hypertension, and that typical North American dietary levels of sodium are just fine, really.) But replacing table salt with sodium bromide? Oof.

> to judge how badly chatgpt messed up. If there's no extra context, assuming that the user is talking about cleaning doesn't seem _that_ unreasonable.

This would be a bizarre assumption for the simple reason that table salt is not normally used in cleaning.

OJFord · 27m ago
Yeah, I thought it was a bit misleading too; it's not exactly 'stopping salt' that caused it, any more than you could describe the ill effects of swapping nasturtiums for lily of the valley in your salads as 'avoiding edible flowers'.
kragen · 9m ago
He was poisoning himself for three months before he was treated, and apparently made a full recovery:

> He was tapered off risperidone before discharge and remained stable off medication at a check-in 2 weeks after discharge.

https://www.acpjournals.org/doi/epdf/10.7326/aimcc.2024.1260

If you eliminated sodium chloride from your diet without replacing it with another sodium source, you would die in much less than three months; I think you'd be lucky to make it two weeks. You can't replace sodium with potassium or lithium to the same degree that you can replace chloride with bromide.

topaz0 · 13m ago
I'd say that the thing that messed up was the AI hype machine for pretending it might ever be a good idea to take chatgpt output as advice.
bell-cot · 31m ago
Same news, Ars Technica, 5 comments, 5 days ago: https://news.ycombinator.com/item?id=44829824
HelloUsername · 27m ago