Hallucination

LLMs can present factually incorrect information as if it were established fact.

📋 Description

Hallucination occurs when an AI system, particularly a language model, produces factually incorrect or logically inconsistent information that appears authoritative. This can mislead users, propagate false information, or distort decision-making. Hallucinations stem from the probabilistic nature of language models, which generate text based on learned patterns rather than verified knowledge.

Hallucinations are particularly problematic in areas such as healthcare, legal interpretation, education, or journalism, where factual precision is critical. While mitigation techniques can reduce their frequency, human oversight and contextual awareness are crucial.

To an extent, hallucinations are inherent to LLMs: they sample tokens from a probability distribution learned from training data and have no built-in notion of "facts." Various prompting and system setups can reduce the likelihood of hallucinations, but it may be impossible to eliminate the risk entirely, and user education is needed to set appropriate expectations.
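
This intuition can be made concrete with a minimal sketch. The candidate tokens and logits below are fabricated numbers for a single decoding step, purely for illustration: the model samples a fluent-looking continuation from its distribution without any check that it is true, so incorrect options still carry meaningful probability mass.

```python
# Minimal illustrative sketch (not any particular model's decoding code):
# an LLM picks each next token by sampling from a probability distribution
# over its vocabulary, not by consulting verified facts. The tokens and
# logits below are made-up values for a single decoding step.
import math
import random

vocab = ["1889", "1887", "1890", "1920"]  # hypothetical candidate next tokens
logits = [2.1, 1.9, 1.7, 0.4]             # hypothetical model scores

def sample_next_token(logits, vocab, temperature=1.0):
    """Softmax over the logits, then sample one token at random."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0], probs

token, probs = sample_next_token(logits, vocab)
print({t: round(p, 2) for t, p in zip(vocab, probs)})
print("sampled:", token)  # plausible-sounding, but not verified against anything
```

Lowering the temperature makes the highest-scoring token dominate, but it does not make that token correct; the distribution reflects patterns in the training data, not verified knowledge.
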

🔍 Public Examples and Common Patterns

- AIID Incident 960: Plaintiffs' Lawyers Admit AI Generated Erroneous Case Citations in Federal Court Filing Against Walmart: Lawyers Rudwin Ayala, T. Michael Morgan (Morgan & Morgan), and Taly Goody (Goody Law Group) were fined a total of $5,000 after their Wyoming federal lawsuit filing against Walmart cited fake cases "hallucinated" by AI. Judge Kelly Rankin sanctioned them, removing Ayala from the case and noting that attorneys must verify AI sources. After Walmart's legal team flagged the filing, it was withdrawn and an internal review followed.

- AIID Incident 464: ChatGPT Provided Non-Existent Citations and Links when Prompted by Users: When asked to provide references, ChatGPT reportedly generated non-existent but convincing-looking citations and links (a simple post-hoc screen for fabricated citations is sketched after this list).
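
Both incidents involve fabricated citations, which are comparatively easy to screen for after generation. The sketch below is a hypothetical illustration of such a check: the KNOWN_CITATIONS set and the regular expression are stand-ins for a real citation database and a proper legal-citation parser, not a production implementation.

```python
# Hypothetical sketch: flag model-generated case citations that cannot be
# matched against a trusted index before they reach a filing. KNOWN_CITATIONS
# is a stand-in for a real citation database or API (an assumption here);
# a production check would query an authoritative source.
import re

KNOWN_CITATIONS = {
    "brown v. board of education, 347 u.s. 483 (1954)",
}

# Crude pattern for "Party v. Party, volume reporter page (year)" citations.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.'-]+ v\. [A-Z][\w.' -]+, \d+ [\w.]+ \d+ \(\d{4}\)"
)

def unverified_citations(model_output: str) -> list[str]:
    """Return citations in the model output that are absent from the trusted index."""
    found = CITATION_PATTERN.findall(model_output)
    return [c for c in found if c.lower() not in KNOWN_CITATIONS]

draft = (
    "As held in Brown v. Board of Education, 347 U.S. 483 (1954), and in "
    "Smith v. Acme Logistics, 512 F.3d 101 (2015), the claim survives."
)  # the second case is fabricated for this example

for citation in unverified_citations(draft):
    print("needs human verification:", citation)
```

A screen like this only catches citations missing from the index; it cannot confirm that a real citation actually supports the proposition it is attached to, so human review of AI-drafted material remains necessary.
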

📐 External Framework Mapping

- MITRE ATLAS: AML.T0017 – Generation of Inaccurate Output
- MIT AI Risk Repository: 3.1 – False or misleading information
- Databricks AI Security Framework: 9.8 – LLM Hallucination
- IBM Risk Atlas: Hallucination