Document Type

Working Paper

Abstract

"Hallucination" has become the common term for errors by AI systems, yet it implies a misleading analogy to human perception. LLMs process tokens; they have no conscious experience or perception. A hallucination is an experience, and (to our knowledge, to date) LLMs do not experience anything. This paper suggests that legal scholars follow the lead of the small number of AI researchers who have proposed "confabulation" as a more accurate term, a metaphor grounded in psychology: people confabulate when they unknowingly invent spurious explanations or facts. We then extend this terminological question into a discussion of the nature of inference by LLMs and by humans. Confabulation can be seen as an example of what Charles Sanders Peirce called "abduction," inference to the best explanation. Peirce showed that abductive inference is a fruitful form of reasoning but an unreliable one, which aptly describes many LLM productions. The paper then tests the legal implications, using issues under the Uniform Commercial Code as test cases, of how a shift from "hallucination" to "confabulation" might influence how courts assess AI reliability, defects, and warranty claims. There may be broader implications for the language we choose in tech law and policy.

Publication Date

4-29-2026

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
