AI hallucinations are getting worse – and they're here to stay

10 OutOfHere 2 5/10/2025, 2:36:11 AM newscientist.com ↗

Comments (2)

lsy · 4h ago
One useful frame for understanding "hallucination" is that an LLM can only predict a fact statistically, using all of the syntax available in its training data. This means that when you ask for a fact, you are really asking the computer to "postcast", i.e. statistically predict the past, based on that training data.

That's why it "hallucinates": sometimes the prediction of the past is simply wrong about the past. This differs from what people do, in that we don't see the past or present as a statistical field; we see them as concrete and discrete. And once we learn a sufficiently believable fact, we generally assume it to be fully true, pending information to the contrary.
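To make the "postcast" framing concrete, here is a minimal toy sketch in Python. It is not tied to any real model or API, and the prompt, token probabilities, and the sample_completion helper are all invented for illustration: it just shows how sampling from a learned next-token distribution can emit a fluent but false completion whenever the wrong answer carries nonzero probability.

    # Toy illustration: a "fact" is completed by sampling from a learned
    # probability distribution over next tokens, so a plausible but wrong
    # continuation can be emitted whenever it has nonzero probability.
    import random

    # Hypothetical distribution a model might have learned for the prompt
    # "The capital of Australia is" -- the wrong answer is common in text.
    next_token_probs = {
        "Canberra": 0.55,    # correct
        "Sydney": 0.35,      # frequent in training text, but wrong
        "Melbourne": 0.10,   # also wrong
    }

    def sample_completion(probs: dict[str, float]) -> str:
        """Sample one token from the distribution, as decoding with temperature > 0 would."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    if __name__ == "__main__":
        random.seed(0)
        samples = [sample_completion(next_token_probs) for _ in range(10)]
        # In expectation, roughly 45% of completions state a falsehood
        # with exactly the same fluency as the correct ones.
        print(samples)

The point of the sketch is that nothing in the sampling step distinguishes a true completion from a false one; both are just high-probability continuations of the prompt.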

OutOfHere · 4h ago
In my experience, this is an issue only with the newer reasoning models, e.g. o3 and o4-mini. It is not an issue with gpt-4.5.

o3 loves to hallucinate a couple of assertions toward the end of a lengthy response.