AI hallucinations are getting worse – and they're here to stay

8 points | greyadept | 5 comments | 5/10/2025, 2:52:39 PM | newscientist.com ↗

Comments (5)

kazinator · 48m ago
> But ["hallucination"] can also refer to an AI-generated answer that is factually accurate, but not actually relevant to the question it was asked, or fails to follow instructions in some other way.

No, "hallucination" can't refer to that. That's a non sequitur or non-compliance and such.

Hallucination is quite specific: it means making statements that describe the circumstances of a world which doesn't exist. Those statements are often relevant; the response would be useful if that world did coincide with the real one.

If your claim is that hallucinations are getting worse, you have to measure the incidence of just those kinds of outputs, treating other forms of irrelevance as a separate category.
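Concretely, the bookkeeping would look something like this (a minimal Python sketch; the labels and sample data are hypothetical, not from any real eval):

    from collections import Counter

    # Hypothetical failure taxonomy: hallucination is tallied apart
    # from the other ways an answer can fail.
    LABELS = {"correct", "hallucination", "irrelevant", "non_compliant"}

    def rates(labeled_outputs):
        """labeled_outputs: one label string per model answer."""
        counts = Counter(labeled_outputs)
        total = sum(counts.values())
        return {label: counts[label] / total for label in LABELS}

    sample = ["correct", "hallucination", "irrelevant",
              "hallucination", "correct", "non_compliant"]
    print(rates(sample))  # only the hallucination rate bears on the claim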

metalman · 32m ago
AI is becoming that problematic tenant in a building: presented well, had great references, but is now bumming money from everybody, stealing people's mail and reading it before putting it back, can't pay their power bill, and wanders around talking to squirrels. We should build some sort of halfway house, where the AIs can get therapy and someone to keep them on their meds, and do the group-living thing until they, maybe, can rejoin society. The last thing we need is some sort of turbocharged A-list psycho beaming itself into everybody's lives, but hey, whatever, right? People have got to do what people have got to do, and part of that is shrugging off all the hype and noise. I just keep doubling down on reality; it seems to come naturally :)
allears · 2h ago
Of course they're here to stay. LLMs aren't designed to tell the truth, or to be able to separate fact from fiction. How could they, given that their training data includes both, and there's no "understanding" there in the first place? Naturally, the most straightforward solution is to redefine "intelligence" and "truth," and they're working on that.
kazinator · 47m ago
Even if the training data contains nothing but truths, you cannot always numerically interpolate among truths and land on another truth.
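A toy illustration (the one-hot encoding here is hypothetical, not how any real model stores facts): take two true statements, encode them as vectors, and average them.

    import numpy as np

    ANSWERS = np.array(["Paris", "Berlin", "Rome"])

    cap_france = np.array([1.0, 0.0, 0.0])  # true: France -> Paris
    cap_italy  = np.array([0.0, 0.0, 1.0])  # true: Italy  -> Rome

    blend = 0.5 * cap_france + 0.5 * cap_italy
    print(blend)  # [0.5 0.  0.5] -- half Paris, half Rome: true of nothing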
etaioinshrdlu · 1h ago
The creators are definitely trying to make them tell the truth. They optimize for benchmarks where truthful answering gets a higher score. All the big LLM vendors now have APIs that can ground their answers in search results.
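The pattern those APIs implement looks roughly like this (a sketch only; every function below is a hypothetical stand-in, not any vendor's actual API):

    def web_search(query, top_k=3):
        # Stand-in for a real search backend; returns canned snippets.
        return [f"snippet {i + 1} about {query}" for i in range(top_k)]

    def llm_complete(prompt):
        # Stand-in for a real model call.
        return "(answer constrained to the sources above)"

    def answer_with_grounding(question):
        snippets = web_search(question)
        context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
        prompt = ("Answer using ONLY these sources; say 'not found' "
                  "if they don't contain the answer.\n"
                  f"{context}\nQuestion: {question}")
        return llm_complete(prompt)

    print(answer_with_grounding("Are hallucination rates rising?"))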

It's a hard, unsolved problem, but I don't understand the impulse to assert that the AI industry is at war with truth!