Why “Hallucination” Is the Wrong Way to Talk About LLMs
Most people in AI circles now know what “hallucination” means in the context of language models: it’s when the model confidently invents information that isn’t true.
But the word itself — hallucination — brings a whole set of assumptions with it. That the model is trying to tell the truth. That it knows what’s real. That it’s having some kind of human-like mental break.
It’s doing none of those things. This article, the one you’re reading now, is a good reminder of that.