Hallucination
A hallucination is a response from an AI model that sounds plausible but is factually wrong. The model invents details, names, numbers, or sources that aren't in its training data or in the context it was given, and presents them with the same confidence as real facts.
The term is most often used for generative language models, but the same phenomenon shows up in image and audio AI. A classic example: ask an LLM for a specific scientific paper and the model produces a fictitious title, fictitious authors, and a fictitious journal, all formatted correctly.
Hallucinations aren't a bug you can fully patch out. They're a consequence of how language models work: they predict the most likely next word, not the most correct one. When the model has no certain knowledge about a topic, it still continues, because "I don't know" is statistically less likely than a smooth sentence.
Several underlying mechanisms contribute.
Probabilistic generation
An LLM picks each word based on probabilities. When several answers are roughly equally likely, the model can pick a path that is linguistically clean but factually wrong.
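The mechanism can be sketched in a few lines. This is a toy next-token sampler, not a real model: the candidate tokens and their scores are invented for illustration. The point is that when two continuations score almost equally, sampling can land on either one, including the factually wrong one.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a softmax over raw scores (logits).
    Toy illustration: tokens and scores are invented, not from a real model."""
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits), weights=probs, k=1)[0]

# "2005" (correct) and "2010" (wrong) are almost equally likely here,
# so repeated sampling will sometimes produce the wrong year.
logits = {"2005": 2.1, "2010": 2.0, "unknown": 0.3}
print(sample_next_token(logits))
```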
Gaps in training data
For some topics the model has seen little or conflicting information. Instead of falling silent, it fills the gap with patterns from similar contexts.
Outdated knowledge
Training data has a cut-off date. Questions about events after that date lead to answers based on old information, which often reads as hallucination.
Overreliance on patterns
Models learn that medical papers have certain structures, that Belgian company names often end with "NV" or "BV", that citations have a certain shape. That pattern awareness produces output that looks credible, even when the content is invented.
Ambiguous prompts
A vague or contradictory prompt forces the model to fill in gaps itself. Those fill-ins are often the first place hallucinations show up.
It helps to split hallucinations by what exactly goes wrong.
Factual hallucination
The model states something concrete that's demonstrably wrong. "The capital of Belgium is Antwerp" or "ISO 27001 was first published in 2010" (it was 2005).
Invented sources
Citations, book titles, author names, or URLs that look real but don't exist. A classic in legal and academic contexts, and the source of several embarrassing incidents in US courts.
Logical hallucination
Individual facts check out, but the reasoning leads to a wrong conclusion. A calculation that slips a wrong intermediate step, for example.
Instruction hallucination
The model claims to have done something it didn't: "I analysed your file" when no file was provided, or "I queried the database" when no tool was available.
Contradictory hallucination
Within the same answer, the model contradicts itself, or strays from explicit instructions in the prompt.
In 2023 a US lawyer submitted a legal brief that cited six cases invented by ChatGPT. The lawyer was fined, and the incident became a textbook example of what goes wrong when you don't verify model output.
In medical settings, AI chatbots have been caught inventing dosages or contraindications that don't exist, with potentially serious consequences for patient safety.
In business contexts the most common pattern is that models invent amounts, dates, or policy details when they have no access to the actual source. A classic: ask about a leave policy and the model combines fragments from different countries into a fictional one.
You can't eliminate hallucinations entirely, but you can drastically reduce their likelihood with a mix of technical and organisational measures.
Grounding via RAG
Feed the model the source documents explicitly and ask for answers with citations. Without this step the model guesses. With it, the model starts from facts.
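A minimal sketch of the prompt-assembly side of this (retrieval itself is out of scope here, and the policy text is invented): the retrieved passages are numbered, and the instructions force the model to cite them or admit it doesn't know.

```python
def build_grounded_prompt(question, chunks):
    """Assemble a prompt that restricts the model to the supplied
    source passages and asks for numbered citations.
    Illustrative sketch; the chunks would come from a retrieval step."""
    sources = "\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return (
        "Answer using ONLY the sources below and cite them as [n]. "
        "If the sources do not contain the answer, say 'I don't know.'\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "How many days of parental leave does the policy grant?",
    ["Policy section 4.2: employees receive 15 days of parental leave."],
)
print(prompt)
```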
Tool calling for hard facts
For calculations, dates, and business data the model calls a tool or API rather than computing or guessing itself.
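The application-side half of tool calling can be as simple as a dispatch table. In this hypothetical sketch, the model emits a tool name plus arguments, and the application runs the real computation instead of letting the model produce a number by pattern-matching:

```python
from datetime import date

# Hypothetical tool registry: names and signatures are invented
# for illustration, not taken from any specific framework.
TOOLS = {
    "add": lambda a, b: a + b,
    "today": lambda: date.today().isoformat(),
}

def execute_tool_call(name, args):
    """Run the tool the model requested; refuse unknown tool names."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**args)

# The model requested a calculation; the application computes it exactly.
print(execute_tool_call("add", {"a": 17, "b": 25}))  # 42
```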
Clear system prompt
Instruct the model explicitly to say "I don't know" when information is missing. It sounds trivial, but it makes a measurable difference.
Lower temperature
For factual tasks, turn the temperature down so the model stays conservative instead of creative.
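The two measures above often land in the same request. Here is what that looks like as a request payload in the messages-plus-temperature shape used by OpenAI-style chat APIs; the model name is a placeholder, not a specific product:

```python
# Sketch of a chat request combining an explicit "I don't know"
# instruction with conservative decoding. Model name is a placeholder.
request = {
    "model": "some-llm",
    "temperature": 0.0,  # stay conservative for factual tasks
    "messages": [
        {
            "role": "system",
            "content": (
                "Answer only from the provided context. "
                "If the information is missing, reply exactly: I don't know."
            ),
        },
        {"role": "user", "content": "When was ISO 27001 first published?"},
    ],
}
print(request["temperature"], request["messages"][0]["role"])
```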
Verification and evaluation
Build automated checks: look up citations in your real sources, cross-reference figures, flag inconsistencies. Add periodic human review on top.
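The citation check in particular is cheap to automate. A minimal sketch, assuming you hold a trusted set of real sources to compare against (the case names below are invented): anything the model cites that isn't in that set gets flagged for human review instead of going to the customer.

```python
def flag_unverified(citations, known_sources):
    """Return the citations that cannot be found in the trusted
    source set; flagged items go to human review.
    Case names here are invented for illustration."""
    return [c for c in citations if c not in known_sources]

known = {"Smith v. Jones (2019)", "Doe v. Acme (2021)"}
model_citations = ["Smith v. Jones (2019)", "Rivera v. Delta (2017)"]
print(flag_unverified(model_citations, known))  # ['Rivera v. Delta (2017)']
```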
Human-in-the-loop
For critical outputs (medical, legal, financial) a human always approves before the result goes to the customer.
Hallucination and bias are sometimes used interchangeably, but they refer to different things.
A hallucination is a factual error: the model says something that's objectively wrong. Correction is usually black and white.
Bias is a structural skew in output: the model favours or disadvantages certain groups, topics, or phrasings based on patterns in its training data. The output can be technically "correct" yet still undesirable.
In practice the two often appear together. A hallucination can amplify bias (by inventing a stereotype, for example), and a biased dataset can steer hallucinations in a certain direction. Treating them as separate risks in your AI policy is worth the effort.