Confabulation (neural networks)
A confabulation, also known as a false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned pattern. The term is also applied to the (non-artificial) neural process by which the brain produces such a false memory (confabulation).
Cognitive science
In cognitive science, the generation of confabulatory patterns is symptomatic of some forms of brain trauma.{{cite book |doi=10.7551/mitpress/2162.001.0001 |oclc=42328595 |title=Conversations in the Cognitive Neurosciences |date=1996 |isbn=978-0-262-28689-3 |editor-last1=Gazzaniga |editor-first1=Michael S. }}{{pn|date=April 2025}} In this sense, confabulations are pathologically induced neural activation patterns that depart from direct experience and learned relationships. In computational modeling of such damage, related brain pathologies such as dyslexia and hallucination result from simulated lesioning{{cite journal |last1=Plaut |first1=David C. |last2=Shallice |first2=Tim |title=Deep dyslexia: A case study of connectionist neuropsychology |journal=Cognitive Neuropsychology |date=November 1993 |volume=10 |issue=5 |pages=377–500 |doi=10.1080/02643299308253469 |oclc=4645580590 }} and neuron death.{{cite journal |last1=Yam |first1=Philip |title=Daisy, Daisy |journal=Scientific American |date=May 1993 |volume=268 |issue=5 |pages=32–33 |doi=10.1038/scientificamerican0593-32 }} Forms of confabulation in which missing or incomplete information is incorrectly filled in by the brain are generally modelled by the well-known neural network process of pattern completion.{{cite web|url=http://www.rni.org/ftsommer/FS/neural_associative_memory.html|title=Neural associative memory|work=rni.org|access-date=2009-07-22|archive-url=https://web.archive.org/web/20081020063141/http://www.rni.org/ftsommer/FS/neural_associative_memory.html|archive-date=2008-10-20|url-status=dead}}{{self-published inline|date=April 2025}}
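The sketch below is a minimal illustration of pattern completion in a Hopfield-style associative memory, in which a corrupted cue settles toward the nearest stored pattern; with heavier corruption or simulated lesioning it can instead settle into a spurious, "confabulated" attractor. The network size, stored patterns, and noise level are arbitrary choices for illustration and are not taken from the cited sources.
<syntaxhighlight lang="python">
import numpy as np

# Minimal, illustrative sketch of pattern completion in a Hopfield-style
# associative memory (not code from the cited sources). A corrupted cue
# settles toward the nearest stored pattern; with enough corruption or
# simulated lesioning, the network may instead settle into a spurious
# ("confabulated") state that matches no stored memory.

rng = np.random.default_rng(0)
n = 64                                          # number of bipolar (+1/-1) units
patterns = rng.choice([-1, 1], size=(3, n))     # three stored "memories"

# Hebbian outer-product learning rule, self-connections removed
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Iteratively update all units until the state stops changing."""
    state = cue.copy()
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Corrupt 25% of one stored pattern and let the network fill in the gaps.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)
cue[flip] *= -1

completed = recall(cue)
print("overlap with original memory:", int(completed @ patterns[0]), "/", n)
</syntaxhighlight>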
Neural networks
{{Also|Hallucination (artificial intelligence)}}
Confabulation is central to a theory of cognition and consciousness by S. L. Thaler, in which thoughts and ideas originate, in both biological and synthetic neural networks, as false or degraded memories that nucleate upon various forms of neuronal and synaptic fluctuation and damage.{{cite patent |country=US |number=5659666A |inventor=Thaler, Stephen L |status=granted |title=Device for the autonomous generation of useful information |pubdate= |gdate=19 August 1997 }}Thaler, S. L. (1997b). [http://imagination-engines.com/iei_seminal_cognition.php "A Quantitative Model of Seminal Cognition: the creativity machine paradigm"], Proceedings of the Mind II Conference, Dublin, Ireland, 1997. Such novel patterns of neural activation are promoted to ideas as other neural nets perceive utility or value in them (i.e., the thalamo-cortical loop).{{cite journal |last1=Thaler |first1=Stephen L. |title=The creativity machine paradigm: Withstanding the argument from consciousness |journal=APA Newsletters: APA Newsletter on Philosophy and Computers |volume=11 |issue=2 |date=2012 |pages=19–30 |url=https://cdn.ymaws.com/www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/v11n2_Computers.pdf }}{{cite book |doi=10.1007/978-1-4614-3858-8_396 |chapter=Creativity Machine® Paradigm |title=Encyclopedia of Creativity, Invention, Innovation and Entrepreneurship |date=2013 |last1=Thaler |first1=Stephen |pages=447–456 |isbn=978-1-4614-3857-1 }} The exploitation of these false memories by other artificial neural networks forms the basis of inventive artificial intelligence systems used in product design,{{cite book |last1=Pickover |first1=Clifford A. |title=Sex, Drugs, Einstein, and Elves: Sushi, Psychedelics, Parallel Universes, and the Quest for Transcendence |date=2005 |publisher=Smart Publications |isbn=978-1-890572-17-4 }}{{pn|date=April 2025}}{{cite book |last1=Plotkin |first1=Robert |title=The Genie in the Machine: How Computer-Automated Inventing Is Revolutionizing Law and Business |date=2009 |publisher=Stanford University Press |isbn=978-0-8047-5699-0 }}{{pn|date=April 2025}} materials discovery,{{cite journal |last1=Thaler |first1=Stephen L. |title=Predicting ultra-hard binary compounds via cascaded auto- and hetero-associative neural networks |journal=Journal of Alloys and Compounds |date=September 1998 |volume=279 |issue=1 |pages=47–59 |doi=10.1016/S0925-8388(98)00611-2 }} and improvisational military robots.{{cite news |id={{ProQuest|402356128}} |last1=Hesman |first1=Tina |title=The Machine That Invents |newspaper=St. Louis Post-Dispatch |date=25 January 2004 |page=A.1 }} Compound confabulatory systems of this kind{{cite book |last1=Thaler |first1=S. L. |chapter=A Proposed Symbolism for Network-Implemented Discovery Processes |pages=1265–1268 |chapter-url={{GBurl|alfctLeoDYoC|p=1265}} |title=WCNN'96, San Diego, California, U.S.A.: World Congress on Neural Networks : International Neural Network Society 1996 Annual Meeting : The Town & Country Hotel San Diego, California, U.S.A., September 15-18, 1996 |date=1996 |publisher=Psychology Press |isbn=978-0-8058-2608-1 }} have been used as sensemaking systems for military intelligence and planning, self-organizing control systems for robots and space vehicles,{{cite book |doi=10.1109/AERO.2007.352649 |chapter=Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations |title=2007 IEEE Aerospace Conference |date=2007 |last1=Patrick |first1=M. Clinton |last2=Thaler |first2=Stephen L.
|last3=Stevenson-Chavis |first3=Katherine |pages=1–6 |isbn=978-1-4244-0524-4 }} and entertainment. The concept of such opportunistic confabulation grew out of experiments with artificial neural networks that simulated brain cell apoptosis.{{cite journal |last1=Yam |first1=Philip |title=As they Lay Dying |journal=Scientific American |date=May 1995 |volume=272 |issue=5 |pages=24–25 |doi=10.1038/scientificamerican0595-24b |bibcode=1995SciAm.272e..24Y }} It was discovered that novel perception, ideation, and motor planning could arise from either reversible or irreversible neurobiological damage.{{cite journal |last1=Thaler |first1=S. L. |title=Death of a gedanken creature |journal=Journal of Near-Death Studies |volume=13 |issue=3 |date=Spring 1995 |pages=149–166 |oclc=197953879 }}{{cite book |doi=10.1007/978-3-319-15347-6_396 |chapter=Creativity Machine® Paradigm |title=Encyclopedia of Creativity, Invention, Innovation and Entrepreneurship |date=2020 |last1=Thaler |first1=Stephen |pages=650–658 |isbn=978-3-319-15346-9 }}
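A highly simplified, hypothetical sketch of the two-network arrangement described above follows: transient noise applied to one trained network yields degraded or novel output patterns, and a second network stands in for the critic that perceives utility in them. The model class, the scoring rule, and the threshold are illustrative assumptions, not Thaler's actual implementation.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical, highly simplified sketch of a two-network confabulatory
# system (not Thaler's actual implementation). Synaptic noise perturbs a
# trained "generator" network so that it emits degraded or novel patterns;
# a second "critic" network scores each candidate and keeps only those it
# judges useful.

rng = np.random.default_rng(1)

n_in, n_out = 16, 8
W_gen = rng.normal(size=(n_out, n_in))   # stands in for learned generator weights
w_critic = rng.normal(size=n_out)        # stands in for a learned utility model

def generate(x, noise_level):
    """Forward pass through the generator with transient synaptic noise."""
    W_noisy = W_gen + rng.normal(scale=noise_level, size=W_gen.shape)
    return np.tanh(W_noisy @ x)

def critic_score(pattern):
    """Scalar 'utility' judgment from the second network."""
    return float(w_critic @ pattern)

x = rng.normal(size=n_in)                # a fixed input cue
candidates = [generate(x, noise_level=0.5) for _ in range(100)]
useful = [p for p in candidates if critic_score(p) > 1.0]   # illustrative threshold
print(f"{len(useful)} of {len(candidates)} confabulated patterns passed the critic")
</syntaxhighlight>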
Large language models
In March 2023, technology journalist Benj Edwards proposed "confabulation" as a more accurate alternative to "hallucination" for describing factual errors generated by large language models (LLMs) like those used with ChatGPT.{{cite news |last1=Edwards |first1=Benj |title=Why ChatGPT and Bing Chat are so good at making things up |url=https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/ |work=Ars Technica |date=6 April 2023 }}{{cite journal |last1=Hicks |first1=Michael Townsen |last2=Humphries |first2=James |last3=Slater |first3=Joe |title=ChatGPT is bullshit |journal=Ethics and Information Technology |date=June 2024 |volume=26 |issue=2 |doi=10.1007/s10676-024-09775-5 }}{{cite book |doi=10.18653/v1/2023.emnlp-main.291 |arxiv=2312.03729 |chapter=Cognitive Dissonance: Why do Language Model Outputs Disagree with Internal Representations of Truthfulness? |title=Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing |date=2023 |last1=Liu |first1=Kevin |last2=Casper |first2=Stephen |last3=Hadfield-Menell |first3=Dylan |last4=Andreas |first4=Jacob |pages=4791–4797 }} Edwards argued that in the context of LLMs, "confabulation" better captures the "creative gap-filling principle" at work when these models generate plausible-sounding but factually incorrect information without implying deception.{{fact|date=April 2025}}
Unlike the term "hallucination," which suggests perceiving something that isn't there, "confabulation" describes how LLMs fill in missing information with fabricated content that appears coherent and convincing. As Edwards noted, "In human psychology, a 'confabulation' occurs when someone's memory has a gap and the brain convincingly fills in the rest without intending to deceive others." While LLMs don't function like human brains, this metaphor helps explain how these models produce false information that appears credible within their generated text.{{fact|date=April 2025}}
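A purely illustrative sketch of this gap-filling behaviour follows: autoregressive sampling weighs only how well each continuation fits the preceding text, so a fluent but incorrect continuation can be emitted with non-trivial probability. The prompt, candidate continuations, and probabilities below are invented for the example and are not drawn from the cited reporting.
<syntaxhighlight lang="python">
import numpy as np

# Purely illustrative sketch of why autoregressive sampling fills gaps
# plausibly rather than truthfully. The model assigns probability only to
# how well a continuation fits the preceding text; nothing in the sampling
# step checks it against facts. The vocabulary and probabilities are made up.

rng = np.random.default_rng(42)

prompt = "The inventor of the telephone was"
continuations = ["Alexander Graham Bell", "Antonio Meucci",
                 "Thomas Edison", "Elisha Gray"]
# Hypothetical next-phrase probabilities: every option is grammatically
# plausible, so an incorrect one can still be sampled with real probability.
probs = np.array([0.55, 0.15, 0.20, 0.10])

sample = rng.choice(continuations, p=probs)
print(prompt, sample)   # fluent either way; accuracy is not part of the objective
</syntaxhighlight>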
The term has since gained traction in technical literature,{{cite book |doi=10.18653/v1/2024.acl-long.770 |arxiv=2406.04175 |chapter=Confabulation: The Surprising Value of Large Language Model Hallucinations |title=Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |date=2024 |last1=Sui |first1=Peiqi |last2=Duede |first2=Eamon |last3=Wu |first3=Sophie |last4=So |first4=Richard |pages=14274–14284 }} and also in clinical discourse{{cite journal |last1=Smith |first1=Andrew L. |last2=Greaves |first2=Felix |last3=Panch |first3=Trishan |title=Hallucination or Confabulation? Neuroanatomy as metaphor in Large Language Models |journal=PLOS Digital Health |date=November 2023 |volume=2 |issue=11 |pages=e0000388 |doi=10.1371/journal.pdig.0000388 |doi-access=free |pmid=37910473 |pmc=10619792 }} where researchers argue it more accurately describes how LLMs generate plausible but incorrect information without implying sensory perception or consciousness.
Computational inductive reasoning
The term confabulation is also used by Robert Hecht-Nielsen to describe inductive reasoning accomplished via Bayesian networks.{{cite journal |last1=Hecht-Nielsen |first1=Robert |title=Cogent confabulation |journal=Neural Networks |date=March 2005 |volume=18 |issue=2 |pages=111–115 |doi=10.1016/j.neunet.2004.11.003 |pmid=15795109 }} Here, confabulation selects the concept most expected to follow a particular context. This is not an Aristotelian deductive process, although it reduces to simple deduction when memory holds only unique events. Most events and concepts, however, occur in multiple, conflicting contexts, so confabulation yields a consensus expectation that may be only marginally more likely than many alternatives; under the theory's winner-take-all constraint, that event, symbol, concept, or attribute is nonetheless the one expected. This parallel computation over many contexts is postulated to occur in less than a tenth of a second. Confabulation grew out of vector analyses of data retrieval such as latent semantic analysis and support vector machines, and it is being implemented computationally on parallel computers.{{fact|date=April 2025}}
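A minimal sketch of this winner-take-all selection follows, assuming a toy co-occurrence table as the knowledge base: each candidate conclusion is scored by the product of smoothed conditional probabilities of the context symbols given that conclusion, and the highest-scoring candidate wins. The corpus, smoothing constant, and symbol sets are illustrative and not from Hecht-Nielsen's implementation.
<syntaxhighlight lang="python">
from collections import Counter
from itertools import combinations

# Minimal sketch of winner-take-all confabulation in the spirit of
# Hecht-Nielsen: given context symbols a1..an, the conclusion c maximizing
# the "cogency" product p(a1|c) * ... * p(an|c) is chosen. The corpus,
# smoothing constant, and symbols are illustrative.

corpus = [
    ["dark", "clouds", "rain"],
    ["dark", "clouds", "storm"],
    ["clear", "sky", "sun"],
    ["dark", "sky", "rain"],
    ["clouds", "wind", "rain"],
]

pair_counts = Counter()
symbol_counts = Counter()
for sentence in corpus:
    symbol_counts.update(sentence)
    for a, b in combinations(sentence, 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def p_given(a, c, eps=1e-4):
    """Smoothed estimate of p(a | c) from co-occurrence counts."""
    return (pair_counts[(a, c)] + eps) / (symbol_counts[c] + eps)

def confabulate(context, candidates):
    """Winner-take-all choice of the candidate with the highest cogency."""
    def cogency(c):
        score = 1.0
        for a in context:
            score *= p_given(a, c)
        return score
    return max(candidates, key=cogency)

print(confabulate(["dark", "clouds"], candidates=["rain", "sun", "storm"]))
</syntaxhighlight>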