AI hallucinations are getting worse – and they're here to stay
16 points by greyadept on 5/10/2025, 2:52:39 PM | 13 comments | newscientist.com
To add to your hyperbolic take, I think the fact that they could always be listening to some degree makes it worse. If AIs are mandatory, I'd like to be able to run my own models everywhere they have one. I don't trust theirs.
No, "hallucination" can't refer to that. That's a non sequitur or non-compliance and such.
Hallucination is quite specific, referring to making statements which can be interpreted as referring to the circumstances of a world which doesn't exist. Those statements are often relevant; the response would be useful if that world did coincide with the real one.
If your claim is that hallucinations are getting worse, you have to measure the incidence of just those kinds of outputs, treating other forms of irrelevance as a separate category.
(Personally I never liked the term; it's inappropriate anthropomorphism and will tend to mislead people about what's actually going on. 'Slop' is arguably a better term, but it is broader, in that it can refer to LLM output which is merely _bad_.)
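To make the measurement point concrete, here is a minimal sketch of the bookkeeping it implies. The category labels and the tiny sample of judged responses are purely hypothetical; the only point is that hallucinations are tallied apart from other failure modes rather than lumped in with them.

    from collections import Counter

    # Hypothetical labels a human rater might assign to each model response.
    # Only "hallucination" feeds the hallucination rate; non sequiturs,
    # refusals, and merely bad answers are tracked as separate categories.
    judged = [
        ("What year did Apollo 11 land?", "hallucination"),  # confident wrong fact
        ("Summarise this paper.", "ok"),
        ("Translate this to French.", "non_sequitur"),
        ("List the header's functions.", "hallucination"),   # invented API
        ("Write a limerick.", "slop"),                        # bad, but not false
    ]

    counts = Counter(label for _, label in judged)
    total = len(judged)
    print("hallucination rate:", counts["hallucination"] / total)
    print("other failure rates:",
          {k: v / total for k, v in counts.items() if k not in ("hallucination", "ok")})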
When Macbeth speaks these lines:
Is this a dagger which I see before me, The handle toward my hand? Come, let me clutch thee.
the character is understood to be hallucinating. We infer that by applying a theory of mind type hypothesis to the text.
It's wrong to apply a theory of mind to an LLM, but the glove seems to fit in the case of the hallucination concept; people have latched on to it. The LLMs themselves use the term and explicitly apologize for having hallucinated.
1. Information is often grounded in the senses, which process real data. The brain can tell if new data is like what's actually real.
2. The brain has a multi-part memory subsystem that's tied into its other subsystems. Only a few artificial architectures have had both neural networks and a memory system; one claims low hallucination rates (a rough sketch of this idea appears below).
3. There's a part of the brain that's damaged in many delusional people. It might be an anti-hallucination mechanism.
4. We learn to trust specific people, like our parents, early on. Then, we believe more strongly what they teach us than what random people say.
5. We have some ability to focus on and integrate the information that's more important to us. We let go of or barely use the rest.
I think hallucinations reflect the weaknesses of man's architectural choices. Some might be built into the pretraining data. We won't know until artificial neural networks achieve parity with God's neural network in the features I mentioned. The remaining differences in performance might come down to pretraining or other missing features.
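To illustrate points 1 and 2 above, here is a toy sketch, not any particular product's architecture, of a generator paired with an explicit memory of trusted facts: candidate claims are checked against memory before being asserted, and unverifiable ones are flagged instead of stated. The memory contents and the stand-in generator are invented for the example.

    # Hypothetical grounding check: pair a generator with a trusted-fact memory.
    TRUSTED_MEMORY = {
        "apollo 11 landing year": "1969",
        "boiling point of water at sea level": "100 C",
    }

    def generate_candidate(prompt: str) -> tuple[str, str]:
        # Stand-in for the generative model: returns (claim_key, claimed_value).
        # Hard-coded here; a real system would parse actual model output.
        return ("apollo 11 landing year", "1968")

    def answer(prompt: str) -> str:
        key, value = generate_candidate(prompt)
        grounded = TRUSTED_MEMORY.get(key)
        if grounded is None:
            return "Not in memory; declining to guess."
        if value != grounded:
            return f"Draft said {value}, but memory says {grounded}; answering {grounded}."
        return value

    print(answer("What year did Apollo 11 land?"))  # catches the drafted 1968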
... I mean, this is likely strictly true, if you define 'AI model' to mean 'any conceivable AI model'. If you're talking about LLMs, though, it's not a reasonable conclusion; LLMs do not work at all like a human brain. LLM 'hallucinations' are nothing like human hallucinations, and the term is really very unhelpful.
You're right that they don't work like the brain, though.
Just because it's a hard, unsolved problem, I don't understand the impulse to assert that the AI industry is at war with truth!
I asked it to create some boilerplate and it presented me with a class function that I knew did not exist, though, like many hallucinations, it would have been very beneficial if it had.
So, instead of just pointing out that it didn't exist and getting the usual "Oh you're right, that function does not exist so use this function instead", I asked it why it gave me that function given that it has access to the header and an example project. It doubled down and stated that the function was in the header and the example project, even presenting a code sample it claimed was from the example project with the fake function.
It felt like a step up from the confidently incorrect state I'd seen before, to a level where, if I weren't knowledgeable enough about the class in question (or able to check), I might have started questioning myself.