Document Type
Working Paper
Abstract
"Hallucination" has become the common term for errors by AI systems, yet it implies a misleading analogy to human perception. LLM's process tokens. They do not have conscious experience or conscious perception. A hallucination is an experience, and (to our knowledge, to date) LLM's do not experience anything. This paper suggests that legal scholars follow the lead of a small number of AI researchers who have suggested that "confabulation" is a more accurate term, a metaphor grounded in psychology. People confabulate when they unknowingly invent spurious explanations or facts. We then take this terminological question and stretch it into a discussion of the nature of inference by LLM's and by humans. "Confabulation" can be seen as an example of what Charles Sanders Peirce called "abduction," inference to the best explanation. Peirce showed how inference by abduction is a fruitful form of reasoning, but also unreliable - which aptly describes many LLM productions. This paper will then try out the legal implications, using some issues in the Uniform Commercial Code as test cases, for how shifting from "hallucination" to "confabulation" might influence how courts interpret AI reliability, defects, or warranty claims. There may be broader implications for the language we choose in tech law and policy.
Publication Date
4-29-2026
Recommended Citation
McJohn, Stephen M. and McJohn, Ian, "AI Mistakes: 'Confabulation' and Abduction, not 'Hallucination'" (2026). Suffolk University Law School Faculty Works. 396.
https://dc.suffolk.edu/suls-faculty/396
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.