After using ChatGPT, man swaps his salt for sodium bromide–and suffers psychosis

12 points by cratermoon | 2 comments | 8/7/2025, 8:18:05 PM | arstechnica.com ↗

Comments (2)

tjr · 4h ago
When the doctors tried their own searches in ChatGPT 3.5, they found that the AI did include bromide in its response, but it also indicated that context mattered and that bromide was not suitable for all uses. But the AI "did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," wrote the doctors.

Plus, LLMs are nondeterministic. Even if it did present a warning one time, it might not the next time. Who knows?
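To make that concrete, here's a rough sketch of why the same prompt can come back with different completions. It's plain NumPy with made-up logits and a hypothetical four-option vocabulary, nothing pulled from the actual model: with a sampling temperature above zero, the next token is drawn from a probability distribution rather than picked deterministically.

    import numpy as np

    def sample_next_token(logits, temperature=0.8, rng=None):
        """Sample a token index from softmax(logits / temperature)."""
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                        # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    # Hypothetical logits for four candidate continuations -- purely illustrative.
    candidates = ["bromide", "iodide", "a doctor-approved substitute", "nothing"]
    logits = [2.0, 1.2, 1.9, 0.3]

    # Two independent runs of the "same" query can pick different continuations.
    for run in range(2):
        idx = sample_next_token(logits)
        print(f"run {run}: {candidates[idx]}")

Run it a few times and the sampled choice can change; only greedy decoding (temperature effectively 0) or a fixed RNG seed would make it repeatable, and a chat interface gives you neither.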

quantified · 3h ago
Unfortunately, ChatGPT is good at a lot and kinda dangerous here and there, just like all LLMs. Obviously you should double-check against one of the few reputable sites in the vast space of domain names, but if you need to do that, why go to the LLM in the first place?