> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.
moduspol · 3h ago
I continue to be surprised that LLM providers haven't been legally cudgeled into neutering the models from ever giving anything that can be construed as medical advice.
I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).
qwertylicious · 3h ago
This is the story of the modern tech industry at large: a major new technology is released, harms are caused, but because of industry norms and a favourable legal environment, companies aren't held liable for those harms.
It's pretty amazing, really. Build a washing machine that burns houses down and the consequences are myriad and severe. But build a machine that allows countless people's private information to be leaked to bad actors and it's a year of credit monitoring and a mea culpa. Build a different machine that literally tells people to poison themselves and, not only are there no consequences, you find folks celebrating that the rules aren't getting in the way.
Go figure.
moduspol · 1h ago
I think the harms of expensive and/or limited and/or inconvenient access to even basic medical expert Q&A are far greater. Though they're not as easy to measure.
usrnm · 2h ago
How many people do you think the early steam engines killed? Or airplanes?
qwertylicious · 2h ago
Or sweatshops or radium infused tinctures.
We've moved on from the 1800s. Why are you using that as your baseline of expectation?
throwaway173738 · 2h ago
I think they were suggesting that LLMs are a nascent technology and we’d expect them to kill a bunch of people in preventable accidents before being heavily regulated.
api · 2h ago
There's a very common belief that things like regulations and especially liability simply halt all innovation. You can see some evidence for this point of view from aerospace with its famous "if it hasn't already flown, it can't fly" mentality. It's why we are still using leaded gasoline in small planes, though this is finally being phased out... but it took an unreasonably long time due to certification requirements and bureaucracy.
If airplanes weren't so heavily regulated we'd have seen leaded gasoline vanish there around the same time it did in cars, but you also might have had a few crashes due to engine failures as the bugs were worked out with changes and retrofits.
I'm a little on the fence here. I don't want a world where we basically conduct human sacrifice for progress, but I also don't want a world that is frozen in time. We really need to learn how to have responsible, careful progress, but still actually do things. Right now I think we are bad at this.
Edit: I think it's related to some extent to the fact that nuanced positions are hard in politics. In popular political discourse positions become more and more simple, flat, and extreme. There's a kind of context collapse that happens when you try to scale human organizations, what I want to call "nuance collapse," that makes it very hard to do anything but all A or all B. For innovation it's "full speed ahead" vs "stop everything."
pjc50 · 1h ago
Yes. It's also worth thinking about the sharp cliff effect. Things either fall into the category of "medical device" (expensive, heavily regulated, scarce, uninnovative), or they don't, in which case it's a free for all of unregulated supplements and unsupported claims.
The home-brew "automatic pancreas" made by putting a Bluetooth control loop between a glucose monitor and an insulin pump counts as a "medical device". Somehow a computer system that encourages people to take bromide isn't. There ought to be a middle ground.
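The loop in question is conceptually tiny, which is part of why the regulatory cliff feels so stark. A minimal Python sketch of the idea, where the device hooks, target level, and dosing constant are all invented for illustration (real closed-loop insulin delivery is exactly the kind of thing that should stay regulated):

```python
import time

# Illustrative only: the device functions, target level, and dosing
# constant below are invented for this sketch, not medical guidance.
TARGET_MG_DL = 110       # hypothetical target blood glucose (mg/dL)
UNITS_PER_MG_DL = 0.02   # hypothetical insulin units per mg/dL above target

def read_glucose_mg_dl() -> float:
    """Stand-in for a Bluetooth read from a continuous glucose monitor."""
    raise NotImplementedError("would poll the CGM here")

def command_bolus(units: float) -> None:
    """Stand-in for a Bluetooth command to the insulin pump."""
    raise NotImplementedError("would drive the pump here")

def control_loop(poll_seconds: int = 300) -> None:
    """Naive proportional controller: dose in proportion to excess glucose."""
    while True:
        excess = read_glucose_mg_dl() - TARGET_MG_DL
        if excess > 0:
            command_bolus(excess * UNITS_PER_MG_DL)
        time.sleep(poll_seconds)
```

A few dozen lines of hobbyist code like this triggers the full "medical device" regime; a general-purpose chatbot handing out the bromide suggestion triggers nothing.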
api · 1h ago
Learning to innovate steadily and responsibly without just stopping is one of the things I'd put on my list of things humanity needs to figure out.
Individuals can do it, but as I said it doesn't scale. An individual can carefully scale a rock face. A committee, political system, or corporate board in charge of scaling rock faces would either scale as fast as possible and let people fall to their deaths or simply stand at the bottom and have meetings to plan the next meeting to discuss the proper climbing strategy (after discussing the color of the bike shed) forever. Public discourse would polarize into climb-fast-die-young versus an ideology condemning all climbing as hubris and invoking the precautionary principle, and many door stop sized books would be written on these ideas, and again either lots of people would die or nothing would happen.
qwertylicious · 2h ago
I have nothing to add other than to say this is, IMO, exactly right. I have no notes.
pjc50 · 1h ago
Quite a lot. Boiler explosions were common until a better understanding of the technology was reached. Is this supposed to be an argument in its favor?
specproc · 2h ago
Considerably fewer when regulated.
Smeevy · 1h ago
How many people were killed after following medical advice from steam engines and airplanes?
DanielHB · 2h ago
Probably because they are actually pretty good at that task.
A lot of the diagnostic process is pattern matching symptoms and patient history to a disease or condition, and those to a drug or treatment.
Of course LLMs can always fail catastrophically, which is why their output needs to be filtered through proper medical advice.
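To make that concrete, here is a toy sketch of the kind of matching I mean, in Python, with an invented condition table that carries no medical meaning (and no LLM involved); it just shows the shape of the task:

```python
# Toy illustration of matching reported symptoms against condition
# "patterns". The table and scoring are invented for the example.
TOY_CONDITIONS = {
    "influenza":   {"fever", "cough", "fatigue", "body aches"},
    "common cold": {"cough", "sore throat", "runny nose"},
    "bromism":     {"fatigue", "paranoia", "rash", "ataxia"},
}

def rank_conditions(symptoms: set[str]) -> list[tuple[str, float]]:
    """Score each toy condition by the fraction of its pattern reported."""
    scores = {
        name: len(symptoms & pattern) / len(pattern)
        for name, pattern in TOY_CONDITIONS.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_conditions({"fever", "cough", "fatigue"}))
# [('influenza', 0.75), ('common cold', 0.333...), ('bromism', 0.25)]
```

LLMs do a fuzzier, far richer version of this over free text, which is exactly why they look promising for the task and also why their failures need a human filter.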
> We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy, and have leveled up GPT-5’s performance in three of ChatGPT’s most common uses: writing, coding, and health.
That was the first time I'd seen "health" listed as one of the most common uses of ChatGPT.
mikepurvis · 2h ago
In a country where speaking to a medical professional can cost hundreds of dollars, I’m 0% surprised that a lot of people’s first reaction is to ask the free bot about their symptoms, or drop a picture of whatever it is for a quick analysis.
This is a natural extension of webmd type stuff, with the added issue that hypochondriacs can now get even more positive reinforcement that they definitely have x, y, and z rare and terrible diseases.
bee_rider · 2h ago
If regulators turn in their direction they can just do s/health/wellness/ to continue giving unregulated medical advice.
z7 · 2h ago
Meanwhile this new paper claims that GPT-5 surpasses medical professionals in medical reasoning:
"On MedXpertQA MM, GPT-5 improves reasoning and understanding scores by +29.62% and +36.18% over GPT-4o, respectively, and surpasses pre-licensed human experts by +24.23% in reasoning and +29.40% in understanding."
To me this is the equivalent of asking why water doesn't come with large red warning labels: "toxic if overconsumed, death can follow". Yeah, it's true, and it's also true that some people can't handle an LLM in their life. I'd expect the percentage for both is so vanishingly small that it just is not something we should care about. I even expect that LLMs not giving out any medical information will lead to much more suffering. Except now it's hidden.
pahkah · 2h ago
This seems like a case of tunnel vision and confirmation bias, the nasty combo that sycophantic LLMs make it easy to fall prey to. Someone gets an idea, asks about it, and the LLM doesn't ask about the context or say that it doesn't make sense; it just plays along, "confirming" that the idea was correct.
I’ve caught myself with this a few times when I sort of suggest a technical solution that, in hindsight, was the wrong way to approach a problem. The LLM will try to find a way to make that work without taking a step back and suggesting that I didn’t understand the problem I was looking at.
willguest · 2h ago
I think an award ceremony would be the best way to draw attention to the outrageous implications of blindly following artificial "intelligence" to wherever it may lead you. Something like the Darwin Awards, but dedicated to clanker wankers, a term I am coining for those people who are so self-absorbed that they feel a disproportionate sense of validation from a machine that is programmed to say "you're absolutely right" at every available juncture.
That's not to say this isn't rooted in mental illness, or perhaps a socially-motivated state of mind that causes a total absence of critical thinking, but some kind of noise needs to be made and I think public (ahem) recognition would be a good way to go.
A catchy name is essential - any suggestions?
polonbike · 2h ago
A-Idiocracy
amiga386 · 2h ago
Oh no, the man used the hallucination engine, which told the man, in a confident tone, a load of old twaddle.
The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.
The humans touting it and bigging it up, so they'll get money, are the problem.
jdonaldson · 2h ago
Humans make mistakes too. Case in point, the hallucination engine didn't tell the person to ingest bromide. It only mentioned that it had chemical similarities to salt. The human mistakenly adopted a bit of information that furthered his narrative. The humans touting and bigging it up are still the problem.
wzdd · 2h ago
Could you provide a source for your statements? The article says that they don't have access to the chat logs, and the quotes from the patient don't rule out that ChatGPT told him to ingest bromide.
bitwize · 2h ago
In his poem "The Raven", Edgar Allan Poe's narrator knows, at least subconsciously, that the bird will respond with "nevermore" to whatever is asked of it. So he subconsciously formulates his queries to it in such a way that the answers will deepen his madness and sorrow.
People are starting to use LLMs in a similar fashion -- to confirm and thus magnify whatever wrong little notion they have in their heads until it becomes an all-consuming life mission to save the world. And the LLMs will happily oblige because they aren't really thinking about what they're saying, just choosing tokens on a mathematical "hey, this sounds nice" criterion. I've seen this happen with my sister, who is starting to take seriously the idea that ChatGPT was actually created 70 years ago at DARPA based on technology recovered from the Roswell crash, based on her conversations with it.
I can't really blame the LLMs entirely for this, as like the raven they're unwittingly being used to justify whatever little bit of madness people have in them. But we all have a little bit of madness in us, so this motivates me further to avoid LLMs entirely, except maybe for messing around.
DiabloD3 · 2h ago
As a reminder to the wider HN: LLMs are only statistical models. They cannot reason, they cannot think, they can only statistically (and non-factually) reproduce what they were trained on. It is not an AI.
This person, sadly, and unfortunately, gaslit themselves using the LLM. They need professional help. This is not a judgement. The Psypost article is a warning to professionals more than it is anything else: patients _do_ gaslight themselves into absurd situations, and LLMs just help accelerate that process, but the patient had to be willing to do it and was looking for an enabler and found it in an LLM.
Although I do believe LLMs should not be used as "chat" models, and only for explicit, on-rails text completion and generation tasks (in the functional lorem ipsum sense), this does not actually seem to be the fault of the LLM directly.
I think providers should be forced to warn users that LLMs cannot factually reproduce anything, but I think this person would have still weaponized LLMs against themselves, and this would have been the same outcome.
infecto · 3h ago
Has a large part of the population always been susceptible to insane conspiracies and psychosis, or is this a recent phenomenon? This feels less like a ChatGPT problem and more like something else is at play.
BlackFly · 2h ago
The psychosis was due to bromism (a buildup of bromide in the bloodstream to toxic levels), which followed health advice to replace sodium chloride with sodium bromide in an attempt to eliminate chloride from his diet. The bromide suggestion is stated as coming from ChatGPT.
The doctors actually noticed the bromine levels first and then inquired about how it got to be like that and got the story about the individual asking for chloride elimination ideas.
Before there was ChatGPT the internet had trolls trying to convince strangers to follow a recipe to make beautiful crystals. The recipe would produce mustard gas. Credulous individuals often have such accidents.
infecto · 2h ago
Sure but this is in line with people falling into psychosis events because ChatGPT agrees with them. I am curious how much of the population is at risk for this. It’s hard for me to comprehend but clearly we have scammers that take advantage of old people and it works.
pjc50 · 1h ago
There's a solid 20% of the population who put really weird answers on surveys. The entire supplements industry relies on people taking things "for their health" based on inadequate or misleading information.
edit: A quick Google search shows there is no evidence of anybody actually ingesting or injecting bleach to fight COVID.
voidUpdate · 2h ago
Yes, absolutely. Many people have fallen for things like "Bleach will cure autism", "vaccines cause autism", "9/11 was an inside job", "the moon landings were fake" etc
keybored · 2h ago
How can technology be the problem?—it’s people, obviously
There’s one on every thread in this place.
infecto · 2h ago
Did you have anything worth adding or are you here to simply be a low quality troll? I understand you have a religious level of belief in your opinion that it clouds your ability to communicate with others but perhaps ask thoughtful questions before lowering the quality.
I am not absolving technology, but as someone who has never been impacted by these problems, it amazes me that so many people get caught up like this, and I simply wonder whether it has always been there and the internet and increased communication just make it easier.
keybored · 2h ago
> Did you have anything worth adding or are you here to simply be a low quality troll? I understand you have a religious level of belief in your opinion that it clouds your ability to communicate with others but perhaps ask thoughtful questions before lowering the quality.
And you?
infecto · 2h ago
So typical of the low quality post to try to turn it right back around. No sorry, you are the one opening up with hyperbolic comments that add no value. I am genuinely curious if this has always been an issue throughout all of humanity. And I only reply to you in the hope that you actually add constructive thought but clearly not.
morkalork · 3h ago
Yes. I see this as the digital equivalent of the people who convince themselves that colloidal silver will heal their ailments and end up turning themselves blue
CyberDildonics · 3h ago
I think religion caught a lot of people with community and self righteous beliefs. Now religious thinking is bleeding over into other sources of misinformation.
incomingpain · 4h ago
>for the past three months, he had been replacing regular table salt with sodium bromide. His motivation was nutritional—he wanted to eliminate chloride from his diet, based on what he believed were harmful effects of sodium chloride.
Ok so the article is blaming chatgpt but this is ridiculous.
Where do you buy this bromide? It's not like it's in the spices aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide
and then sprinkle that on his food. I don't care what chatgpt said... that dude is the problem.
jazzyjackson · 3h ago
Agreed, but, it is in the spices aisle if your spice aisle is amazon.com
throwaway173738 · 2h ago
You haven’t lived if you haven’t tried my Bromine Chicken. I make it every Christmas.
beardyw · 2h ago
> Ok so the article is blaming chatgpt but this is ridiculous.
People are born with certain attributes. Some are born tall, some left handed and some gullible. None of those is a reason to criticise them.
bluefirebrand · 3h ago
> I dont care what chatgpt said... that dude is the problem.
This reminds me of people who fall for phone scams or whatever. Some number of the general population is susceptible to being scammed, and they wind up giving away their life's savings or whatever to someone claiming to be their grandkid
There's always an asshole saying "well that's their own fault if they fall for that stuff" as if they chose it on purpose instead of being manipulated into it by other people taking advantage of them
See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it
crazygringo · 2h ago
> See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it
I have literally never seen that expressed on HN.
In every case where workers are screwed out of shares, the sympathy among commenters seems to be 100% with the employees. HN is pretty anti-corporate overall, if you haven't noticed. Yes it's pro-startup but even more pro-employee.
bluefirebrand · 11m ago
> HN is pretty anti-corporate overall, if you haven't noticed.
My observation is that any given thread can go either way, and it sometimes feels like a coin toss which side of HN will be most represented in any given thread
Yes, I have seen quite a lot of anti-corporate posts, but I also see quite a few anti-employee posts. This is likely my own negative bias but I think many users here are generally pro-Capital which aligns them with corporate interests even if they are some degree of anti-corporate anyways
Probably I just fixate too much on the posts I have a negative reaction to
rowanG077 · 2h ago
Both can be true at the same time. You can be an idiot to fall for a scam while the scammer is a dickhead criminal.
bluefirebrand · 21m ago
Scammers only succeed because they know some percentage of the population is going to fall for it
Can't we have some empathy for people just trying to do their best in a world where so many people are trying to take advantage of them?
Their victims are often the vulnerable ones in our society too. The elderly, the infirm, the mentally ill. It's not just "stupid people fall for scams"; it takes one lapse of judgement over a lifetime of being targeted. Come on
Workaccount2 · 2h ago
I expect a barrage of these headline-grabbing long-tail stories being pushed out of psychology circles as more and more people find ChatGPT more helpful than their therapist (which is already becoming very popular).
We need to totally ban LLMs from doing therapy-like conversations, so that a pinch of totally unhinged people don't do crazy stuff. And of course everyone needs to pay a human for therapy to stop this.
davidmurdoch · 2h ago
Is your last paragraph sarcasm?
Workaccount2 · 2h ago
Firmly
permo-w · 2h ago
>his exact interaction with ChatGPT remains unverified
there is no informational point to this article if the entire crux is "the patient wanted to eat less 'chloride' and claims ChatGPT told him about Sodium Bromide". based on this article, the interaction could have been as minimal as the guy asking for the existence of an alternative salt to sodium chloride, unqualified information he equally could have found on a chemistry website or Wikipedia
throwaway173738 · 2h ago
Yeah, you found the paragraph where they highlight that they don't know what interaction with ChatGPT gave him that information. The reason they're sharing the anecdote is because there might be a new trend developing in medicine where people go to the ED after taking advice from an LLM that leads to injury, and maybe screening questions should include asking about that.
permo-w · 1h ago
and yet this doesn't change the fact that they wrote an entire medical article the crux of which is little more than hearsay. "did you get advice from an LLM?" is far less relevant and all-catching a question here than "have you made any dietary changes recently?" and yet the article isn't about that, because odd dietary changes aren't the attention-grabbing topic right now. I imagine you could find thousands of similar stories where the culprit was google or facebook or youtube instead of an LLM, and yet nothing needs to be changed for them because they too can be covered with a question akin to "have you made any dietary changes recently?"
dpassens · 2h ago
And if Wikipedia didn't warn that Sodium Bromide was poisonous, would that not be irresponsible? Chemistry websites seem different because, presumably, their target audience is chemists who can be trusted not to consume random substances.
having tried it quite a few times with quite a few variations, without making it extremely clear that I was talking in a chemistry sense rather than a dietary one, I was unable to get ChatGPT to give anything other than a long list of edible salts
essentially I think it's telling that there are zero screenshots of the original conversation or an attempted replication in the article or the report, when there's no good reason that there wouldn't be. I often enjoy reading your work, so I do have some trust in your judgment, but this whole article strikes me as off, like the people behind it have been waiting for something like this to happen as an excuse to jump on it and get credit, rather than it actually being a major problem
simonw · 51m ago
Why would medical professionals mislead on this though?
It seems factual that this person decided to start consuming bromine and it had an adverse effect on them.
When asked why, they said ChatGPT told them it was a replacement for chloride.
Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.
hereme888 · 3h ago
The man followed insane health advice given by GPT 3.5. We're at v5. Very outdated report.
headinsand · 3h ago
It’s about time to rename this community “ostrich news”.
Relevant section:
> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.