> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.
moduspol · 1h ago
I continue to be surprised that LLM providers haven't been legally cudgeled into neutering the models from ever giving anything that can be construed as medical advice.
I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).
qwertylicious · 1h ago
This is the story of the modern tech industry at large: a major new technology is released, harms are caused, but because of industry norms and a favourable legal environment, companies aren't held liable for those harms.
It's pretty amazing, really. Build a washing machine that burns houses down and the consequences are myriad and severe. But build a machine that allows countless people's private information to be leaked to bad actors and it's a year of credit monitoring and a mea culpa. Build a different machine that literally tells people to poison themselves and, not only are there no consequences, you find folks celebrating that the rules aren't getting in the way.
Go figure.
usrnm · 59m ago
How many people do you think the early steam engines killed? Or airplanes?
pjc50 · 57s ago
Quite a lot. Boiler explosions were common until a better understanding of the technology was reached. Is this supposed to be an argument in its favor?
qwertylicious · 56m ago
Or sweatshops or radium infused tinctures.
We've moved on from the 1800s. Why are you using that as your baseline of expectation?
throwaway173738 · 46m ago
I think they were suggesting that LLMs are a nascent technology and we’d expect them to kill a bunch of people in preventable accidents before being heavily regulated.
api · 52m ago
There's a very common belief that things like regulations and especially liability simply halt all innovation. You can see some evidence for this point of view from aerospace with its famous "if it hasn't already flown, it can't fly" mentality. It's why we are still using leaded gasoline in small planes, though this is finally being phased out... but it took an unreasonably long time due to certification requirements and bureaucracy.
If airplanes weren't so heavily regulated we'd have seen leaded gasoline vanish there around the same time it did in cars, but you also might have had a few crashes due to engine failures as the bugs were worked out with changes and retrofits.
I'm a little on the fence here. I don't want a world where we basically conduct human sacrifice for progress, but I also don't want a world that is frozen in time. We really need to learn how to have responsible, careful progress, but still actually do things. Right now I think we are bad at this.
Edit: I think it's related to some extent to the fact that nuanced positions are hard in politics. In popular political discourse positions become more and more simple, flat, and extreme. There's a kind of context collapse that happens when you try to scale human organizations, what I want to call "nuance collapse," that makes it very hard to do anything but all A or all B. For innovation it's "full speed ahead" vs "stop everything."
qwertylicious · 50m ago
I have nothing to add other than to say this is, IMO, exactly right. I have no notes.
specproc · 53m ago
Considerably fewer when regulated.
DanielHB · 44m ago
Probably because they are actually pretty good at that task.
A lot of the diagnostic process is pattern matching symptoms/patient history to a disease/condition, and those to a drug/treatment.
Of course, LLMs can always fail catastrophically, which is why their output needs to be filtered through proper medical advice.
> We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy, and have leveled up GPT-5’s performance in three of ChatGPT’s most common uses: writing, coding, and health.
That was the first time I'd seen "health" listed as one of the most common uses of ChatGPT.
mikepurvis · 50m ago
In a country where speaking to a medical professional can cost hundreds of dollars, I’m 0% surprised that a lot of people’s first reaction is to ask the free bot about their symptoms, or drop a picture of whatever it is for a quick analysis.
This is a natural extension of webmd type stuff, with the added issue that hypochondriacs can now get even more positive reinforcement that they definitely have x, y, and z rare and terrible diseases.
bee_rider · 48m ago
If regulators turn in their direction they can just do s/health/wellness/ to continue giving unregulated medical advice.
z7 · 50m ago
Meanwhile this new paper claims that GPT-5 surpasses medical professionals in medical reasoning:
"On MedXpertQA MM, GPT-5 improves reasoning and understanding scores by +29.62% and +36.18% over GPT-4o, respectively, and surpasses pre-licensed human experts by +24.23% in reasoning and +29.40% in understanding."
To me this is the equivalent of asking why water doesn't carry large red warning labels: "toxic if overconsumed, death can follow." Yeah, it's true, and it's also true that some people can't handle LLMs safely in their lives. I'd expect the percentage in both cases is so vanishingly small that it just isn't something we should care about. I'd even expect that LLMs not giving out any medical information would lead to much more suffering, except now it would be hidden.
pahkah · 48m ago
This seems like a case of tunnel vision and confirmation bias, the nasty combo that sycophantic LLMs make it easy to fall prey to. Someone gets an idea, asks about it, and the LLM doesn't ask about the context or say the idea doesn't make sense; it just plays along, "confirming" that the idea was correct.
I’ve caught myself with this a few times when I sort of suggest a technical solution that, in hindsight, was the wrong way to approach a problem. The LLM will try to find a way to make that work without taking a step back and suggesting that I didn’t understand the problem I was looking at.
willguest · 38m ago
I think an award ceremony would be the best way to draw attention to the outrageous implications of blindly following artificial "intelligence" to wherever it may lead you. Something like the Darwin Awards, but dedicated to clanker wankers, a term I am coining for those people who are so self-absorbed that they feel a disproportionate sense of validation from a machine that is programmed to say "you're absolutely right" at every available juncture.
That's not to say this isn't rooted in mental illness, or perhaps a socially-motivated state of mind that causes a total absence of critical thinking, but some kind of noise needs to be made and I think public (ahem) recognition would be a good way to go.
A catchy name is essential - any suggestions?
polonbike · 23m ago
A-Idiocracy
amiga386 · 53m ago
Oh no, the man used the hallucination engine, which told the man, in a confident tone, a load of old twaddle.
The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.
The humans touting it and bigging it up, so they'll get money, are the problem.
jdonaldson · 44m ago
Humans make mistakes too. Case in point, the hallucination engine didn't tell the person to ingest bromide. It only mentioned that it had chemical similarities to salt. The human mistakenly adopted a bit of information that furthered his narrative. The humans touting and bigging it up are still the problem.
wzdd · 33m ago
Could you provide a source for your statements? The article says that they don’t have access to the chat logs, and the quotes from the patient don’t suggest that chatgpt did not tell him to ingest bromide.
DiabloD3 · 52m ago
As a reminder to the wider HN: LLMs are only statistical models. They cannot reason, they cannot think, they can only statistically (and non-factually) reproduce what they were trained on. It is not an AI.
This person, sadly, and unfortunately, gaslit themselves using the LLM. They need professional help. This is not a judgement. The Psypost article is a warning to professionals more than it is anything else: patients _do_ gaslight themselves into absurd situations, and LLMs just help accelerate that process, but the patient had to be willing to do it and was looking for an enabler and found it in an LLM.
Although I do believe LLMs should not be used as "chat" models, and only for explicit, on-rails text completion and generation tasks (in the functional lorem ipsum sense), this does not actually seem to be the fault of the LLM directly.
I think providers should be forced to warn users that LLMs cannot factually reproduce anything, but I think this person would have still weaponized LLMs against themselves, and this would have been the same outcome.
bitwize · 40m ago
In his poem "The Raven", Edgar Allan Poe's narrator knows, at least subconsciously, that the bird will respond with "nevermore" to whatever is asked of it. So he subconsciously formulates his queries to it in such a way that the answers will deepen his madness and sorrow.
People are starting to use LLMs in a similar fashion -- to confirm and thus magnify whatever wrong little notion they have in their heads until it becomes an all-consuming life mission to save the world. And the LLMs will happily oblige because they aren't really thinking about what they're saying, just choosing tokens on a mathematical "hey, this sounds nice" criterion. I've seen this happen with my sister, who is starting to take seriously the idea that ChatGPT was actually created 70 years ago at DARPA based on technology recovered from the Roswell crash, based on her conversations with it.
I can't really blame the LLMs entirely for this, as like the raven they're unwittingly being used to justify whatever little bit of madness people have in them. But we all have a little bit of madness in us, so this motivates me further to avoid LLMs entirely, except maybe for messing around.
infecto · 1h ago
Has a large part of the population always been susceptible to insane conspiracies and psychosis, or is this a recent phenomenon? This feels less like a ChatGPT problem and more like something else is at play.
BlackFly · 55m ago
The psychosis was due to bromism (bromide building up in the bloodstream to toxic levels), which resulted from health advice to replace sodium chloride with sodium bromide in an attempt to eliminate chloride from his diet. The bromide suggestion is stated as coming from ChatGPT.
The doctors actually noticed the bromine levels first and then inquired about how it got to be like that and got the story about the individual asking for chloride elimination ideas.
Before there was ChatGPT the internet had trolls trying to convince strangers to follow a recipe to make beautiful crystals. The recipe would produce mustard gas. Credulous individuals often have such accidents.
infecto · 22m ago
Sure, but this is in line with people falling into psychotic episodes because ChatGPT agrees with them. I am curious how much of the population is at risk for this. It's hard for me to comprehend, but clearly we have scammers who take advantage of old people, and it works.
jalk · 1h ago
E.g. bleach against COVID
edit: A quick Google search shows there is no evidence of anybody actually ingesting/injecting bleach to fight COVID
voidUpdate · 55m ago
Yes, absolutely. Many people have fallen for things like "Bleach will cure autism", "vaccines cause autism", "9/11 was an inside job", "the moon landings were fake" etc
keybored · 1h ago
How can technology be the problem?—it’s people, obviously
There’s one on every thread in this place.
infecto · 48m ago
Did you have anything worth adding, or are you here simply to be a low-quality troll? I understand you have such a religious level of belief in your opinion that it clouds your ability to communicate with others, but perhaps ask thoughtful questions before lowering the quality.
I am not absolving technology, but as someone who has never been impacted by these problems, it amazes me that so many people get caught up like this, and I simply wonder if it has always been there but the internet and increased communication make it easier.
keybored · 32m ago
> Did you have anything worth adding, or are you here simply to be a low-quality troll? I understand you have such a religious level of belief in your opinion that it clouds your ability to communicate with others, but perhaps ask thoughtful questions before lowering the quality.
And you?
infecto · 25m ago
So typical of a low-quality post to try to turn it right back around. No, sorry, you are the one opening up with hyperbolic comments that add no value. I am genuinely curious whether this has always been an issue throughout all of humanity. And I only reply to you in the hope that you actually add constructive thought, but clearly not.
morkalork · 1h ago
Yes. I see this as the digital equivalent of the people who convince themselves that colloidal silver will heal their ailments and turn themselves blue
CyberDildonics · 1h ago
I think religion caught a lot of people with community and self-righteous beliefs. Now religious thinking is bleeding over into other sources of misinformation.
incomingpain · 2h ago
>for the past three months, he had been replacing regular table salt with sodium bromide. His motivation was nutritional—he wanted to eliminate chloride from his diet, based on what he believed were harmful effects of sodium chloride.
Ok so the article is blaming chatgpt but this is ridiculous.
Where do you buy this bromide? It's not like it's in the spices aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide and then sprinkle that on his food. I don't care what ChatGPT said... that dude is the problem.
jazzyjackson · 1h ago
Agreed, but it is in the spices aisle if your spice aisle is amazon.com
throwaway173738 · 55m ago
You haven’t lived if you haven’t tried my Bromine Chicken. I make it every Christmas.
beardyw · 55m ago
> Ok so the article is blaming chatgpt but this is ridiculous.
People are born with certain attributes. Some are born tall, some left handed and some gullible. None of those is a reason to criticise them.
bluefirebrand · 1h ago
> I don't care what ChatGPT said... that dude is the problem.
This reminds me of people who fall for phone scams or whatever. Some number of the general population is susceptible to being scammed, and they wind up giving away their life's savings or whatever to someone claiming to be their grandkid
There's always an asshole saying "well that's their own fault if they fall for that stuff" as if they chose it on purpose instead of being manipulated into it by other people taking advantage of them
See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it
crazygringo · 54m ago
> See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it
I have literally never seen that expressed on HN.
In every case where workers are screwed out of shares, the sympathy among commenters seems to be 100% with the employees. HN is pretty anti-corporate overall, if you haven't noticed. Yes it's pro-startup but even more pro-employee.
rowanG077 · 55m ago
Both can be true at the same time. You can be an idiot to fall for a scam while the scammer is a dickhead criminal.
Workaccount2 · 1h ago
I expect a barrage of these headline-grabbing, long-tail stories being pushed out of psychology circles as more and more people find ChatGPT more helpful than their therapist (which is already becoming very popular).
We need to totally ban LLMs from doing therapy like conversations, so that a pinch of totally unhinged people don't do crazy stuff. And of course everyone needs to pay a human for therapy to stop this.
davidmurdoch · 1h ago
Is your last paragraph sarcasm?
Workaccount2 · 1h ago
Firmly
hereme888 · 1h ago
The man followed insane health advice given by GPT-3.5. We're at v5. Very outdated report.
headinsand · 1h ago
It’s about time to rename this community “ostrich news”.
permo-w · 56m ago
>his exact interaction with ChatGPT remains unverified
There is no informational point to this article if the entire crux is "the patient wanted to eat less 'chloride' and claims ChatGPT told him about sodium bromide." Based on this article, the interaction could have been as minimal as the guy asking whether an alternative salt to sodium chloride exists, unqualified information he could equally have found on a chemistry website or Wikipedia.
throwaway173738 · 49m ago
Yeah, you found the paragraph where they highlight that they don't know what interaction with ChatGPT gave him that information. The reason they're sharing the anecdote is that there might be a new trend developing in medicine where people go to the ED after taking advice from an LLM that leads to injury, and maybe screening questions should include asking about that.
dpassens · 46m ago
And if Wikipedia didn't warn that Sodium Bromide was poisonous, would that not be irresponsible? Chemistry websites seem different because, presumably, their target audience is chemists who can be trusted not to consume random substances.
Relevant section:
> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.