Stochastic parrot

{{Short description|Term used in machine learning}}

In machine learning, the term stochastic parrot is a disparaging metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding.{{cite conference |last1=Bender |first1=Emily M. |last2=Gebru |first2=Timnit |last3=McMillan-Major |first3=Angelina |last4=Mitchell |first4=Margaret |title=On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? |book-title=Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency |year=2021 |doi=10.1145/3442188.3445922}}

Subsequent research and expert commentary, including large-scale benchmark studies and analysis by Geoffrey Hinton, have challenged this metaphor by documenting emergent reasoning and problem-solving abilities in modern LLMs.{{cite arXiv |eprint=2303.12712 |first1=Sébastien |last1=Bubeck |title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 |year=2023|class=cs.CL }}{{cite news |last=Pelley |first=Scott |date=8 October 2023 |title="Godfather of Artificial Intelligence" Geoffrey Hinton on the promise, risks of advanced AI |url=https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/ |access-date=2 July 2025 |work=CBS News}}

Origin and definition

The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). They argued that large language models (LLMs) present dangers such as environmental and financial costs, inscrutability leading to unknown dangerous biases, and potential for deception, and that they cannot understand the concepts underlying what they learn.{{Cite web |last=Hao |first=Karen |date=4 December 2020 |title=We read the paper that forced Timnit Gebru out of Google. Here's what it says. |url=https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/ |url-status=live |archive-url=https://web.archive.org/web/20211006233625/https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/ |archive-date=6 October 2021 |access-date=19 January 2022 |website=MIT Technology Review |language=en}}

= Etymology =

{{see also|Stochastic process#Etymology}}

The word "stochastic"{{snd}}from the ancient Greek "{{Transliteration|grc|stokhastikos}}" ('based on guesswork'){{snd}}is a term from probability theory meaning "randomly determined". The word "parrot" refers to parrots' ability to mimic human speech, without understanding its meaning.

= Purpose =

In their paper, Bender et al. argue that LLMs probabilistically link words and sentences together without considering meaning, and are therefore mere "stochastic parrots". According to the machine learning professionals Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two important limitations:{{sfn|Lindholm|Wahlström|Lindsten|Schön|2022|pp=322–3}}{{Cite news |last=Uddin |first=Muhammad Saad |date=April 20, 2023 |title=Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations |url=https://towardsai.net/p/machine-learning/stochastic-parrots-a-novel-look-at-large-language-models-and-their-limitations |access-date=2023-05-12 |website=Towards AI |language=en-US}}

  • LLMs are limited by the data they are trained on and are simply stochastically repeating the contents of datasets.
  • Because they generate outputs only from patterns in their training data, LLMs cannot recognize when they are saying something incorrect or inappropriate.

Lindholm et al. noted that, with poor quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".{{sfn|Lindholm|Wahlström|Lindsten|Schön|2022|pp=322–3}}
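The purely statistical text generation the metaphor alludes to can be illustrated with a deliberately simple sketch. The following toy bigram sampler (in Python, using a made-up miniature corpus; real LLMs are neural networks trained on vastly more data, not lookup tables) generates text solely by sampling which word most often followed the previous one in its training data, with no representation of meaning:

<syntaxhighlight lang="python">
import random
from collections import Counter, defaultdict

# Made-up miniature corpus standing in for a training set.
corpus = ("the parrot repeats the words the parrot has heard "
          "and the parrot repeats the words again").split()

# Count how often each word follows each other word (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample_next(word):
    """Pick a next word in proportion to how often it followed `word` in the corpus."""
    followers = bigrams.get(word)
    if not followers:
        return None  # no continuation was ever seen in training
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate text by repeatedly sampling a statistically likely next word.
word, output = "the", ["the"]
for _ in range(10):
    word = sample_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
</syntaxhighlight>

Every sentence such a sampler produces is a statistically plausible recombination of its corpus, which is the sense in which critics describe LLM output as "parroting"; whether the same characterization applies to large neural models is the subject of the debate below.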

Subsequent usage

Stochastic parrot is now a neologism used by AI skeptics to allege that LLMs lack understanding of the meaning of their outputs, and it is sometimes interpreted as a "slur against AI".{{Cite news |last=Zimmer |first=Ben |date=2024-01-18 |title='Stochastic Parrot': A Name for AI That Sounds a Bit Less Intelligent |url=https://www.wsj.com/arts-culture/books/stochastic-parrot-a-name-for-ai-that-sounds-a-bit-less-intelligent-789372f5 |access-date=2024-04-01 |work=Wall Street Journal |language=en-US}} Its use expanded further when Sam Altman, CEO of OpenAI, used the term ironically, tweeting "i am a stochastic parrot and so r u" to suggest that, by the same reasoning, humans could be dismissed as mere next-word predictors generating statistically likely sequences. The term was then designated the 2023 AI-related Word of the Year by the American Dialect Society, chosen over the words "ChatGPT" and "LLM".{{Cite news |last=Corbin |first=Sam |date=2024-01-15 |title=Among Linguists, the Word of the Year Is More of a Vibe |url=https://www.nytimes.com/2024/01/15/crosswords/linguistics-word-of-the-year.html |access-date=2024-04-01 |work=The New York Times |language=en-US |issn=0362-4331}}

Some researchers use the phrase to describe LLMs as pattern matchers that generate plausible human-like text from their vast training data, merely parroting in a stochastic fashion. However, other researchers argue that LLMs are, in fact, at least partially able to understand language.{{Cite journal |last=Arkoudas |first=Konstantine |date=2023-08-21 |title=ChatGPT is no Stochastic Parrot. But it also Claims that 1 is Greater than 1 |url=https://doi.org/10.1007/s13347-023-00619-6 |journal=Philosophy & Technology |language=en |volume=36 |issue=3 |article-number=54 |doi=10.1007/s13347-023-00619-6 |issn=2210-5441|url-access=subscription }}

Debate

Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations. The development of these new systems has deepened the discussion of the extent to which LLMs understand or are simply "parroting".

= Subjective experience =

In the mind of a human being, words and language correspond to things one has experienced.{{Cite journal |last=Fayyad |first=Usama M. |date=2023-05-26 |title=From Stochastic Parrots to Intelligent Assistants—The Secrets of Data and Human Interventions |url=https://ieeexplore.ieee.org/document/10148666 |journal=IEEE Intelligent Systems |volume=38 |issue=3 |pages=63–67 |doi=10.1109/MIS.2023.3268723 |issn=1541-1672|url-access=subscription }} For LLMs, words may correspond only to other words and patterns of usage fed into their training data.{{Cite book |last=Saba |first=Walid S. |chapter=Stochastic LLMS do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMS |series=Lecture Notes in Computer Science |date=2023 |volume=14320 |editor-last=Almeida |editor-first=João Paulo A. |editor2-last=Borbinha |editor2-first=José |editor3-last=Guizzardi |editor3-first=Giancarlo |editor4-last=Link |editor4-first=Sebastian |editor5-last=Zdravkovic |editor5-first=Jelena |title=Conceptual Modeling |language=en |location=Cham |publisher=Springer Nature Switzerland |pages=3–19 |doi=10.1007/978-3-031-47262-6_1 |arxiv=2309.05918 |isbn=978-3-031-47262-6}}{{Cite journal |last1=Mitchell |first1=Melanie |last2=Krakauer |first2=David C. |date=2023-03-28 |title=The debate over understanding in AI's large language models |journal=Proceedings of the National Academy of Sciences |language=en |volume=120 |issue=13 |pages=e2215907120 |doi=10.1073/pnas.2215907120 |doi-access=free |issn=0027-8424 |pmc=10068812 |pmid=36943882|arxiv=2210.13966 |bibcode=2023PNAS..12015907M }} Proponents of the idea of stochastic parrots thus conclude that LLMs are incapable of actually understanding language.

= Hallucinations and mistakes =

The tendency of LLMs to pass off fabricated information as fact is cited in support of the stochastic parrot view. Called hallucinations or confabulations, these are cases in which an LLM synthesizes information that matches some pattern, but not reality. That LLMs cannot distinguish fact from fiction leads to the claim that they cannot connect words to a comprehension of the world, as language is supposed to do. Further, LLMs often fail to decipher complex or ambiguous grammar that relies on understanding the meaning of language. One example, borrowed from Saba, is the following prompt:

{{blockquote|The wet newspaper that fell down off the table is my favorite newspaper. But now that my favorite newspaper fired the editor I might not like reading it anymore. Can I replace ‘my favorite newspaper’ by ‘the wet newspaper that fell down off the table’ in the second sentence?}}

Some LLMs respond to this in the affirmative, not understanding that the meaning of "newspaper" differs in the two contexts: in the first it is an object, in the second an institution. Based on these failures, some AI professionals conclude that LLMs are no more than stochastic parrots.

= Benchmarks and experiments =

One argument against the hypothesis that LLMs are stochastic parrots is their results on benchmarks for reasoning, common sense and language understanding. In 2023, some LLMs showed good results on many language understanding tests, such as the Super General Language Understanding Evaluation (SuperGLUE).{{Cite arXiv |last1=Wang |first1=Alex |last2=Pruksachatkun |first2=Yada |last3=Nangia |first3=Nikita |last4=Singh |first4=Amanpreet |last5=Michael |first5=Julian |last6=Hill |first6=Felix |last7=Levy |first7=Omer |last8=Bowman |first8=Samuel R. |date=2019-05-02 |title=SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems |class=cs.CL |language=en |eprint=1905.00537}} GPT-4, for example, scored around the {{nowrap|90th percentile}} on the Uniform Bar Examination. Such tests, and the smoothness of many LLM responses, help as many as 51% of AI professionals believe that they can truly understand language with enough data, according to a 2022 survey.

= Expert rebuttals =

Leading AI researchers dispute the notion that LLMs merely “parrot” their training data.

  • Geoffrey Hinton argues that “to predict the next word accurately you have to understand the sentence”, a view he presented on 60 Minutes in 2023. He also uses logical puzzles to demonstrate that LLMs actually understand language.{{Cite AV media |url=https://www.youtube.com/watch?v=qrvK_KuIeJk |title="Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview |date=2023-10-09 |last=60 Minutes |access-date=2025-07-02 |via=YouTube}}
  • A 2024 Scientific American investigation described a closed Berkeley workshop where state-of-the-art models solved novel tier-4 mathematics problems and produced coherent proofs, indicating reasoning abilities beyond memorization.{{cite magazine |last=Morris |first=Ian |title=Inside the secret meeting where mathematicians struggled to outsmart AI |url=https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/ |magazine=Scientific American |date=24 March 2024 |access-date=2 July 2025}}
  • The GPT-4 Technical Report showed human-level results on professional and academic exams (e.g., the Uniform Bar Exam and USMLE), challenging the “parrot” characterization.{{cite arXiv |eprint=2303.08774 |title=GPT-4 Technical Report |date=2023 |author1=OpenAI |last2=Achiam |first2=Josh |last3=Adler |first3=Steven |last4=Agarwal |first4=Sandhini |last5=Ahmad |first5=Lama |last6=Akkaya |first6=Ilge |author7=Florencia Leoni Aleman |last8=Almeida |first8=Diogo |last9=Altenschmidt |first9=Janko |last10=Altman |first10=Sam |last11=Anadkat |first11=Shyamal |last12=Avila |first12=Red |last13=Babuschkin |first13=Igor |last14=Balaji |first14=Suchir |last15=Balcom |first15=Valerie |last16=Baltescu |first16=Paul |last17=Bao |first17=Haiming |last18=Bavarian |first18=Mohammad |last19=Belgum |first19=Jeff |last20=Bello |first20=Irwan |last21=Berdine |first21=Jake |last22=Bernadett-Shapiro |first22=Gabriel |last23=Berner |first23=Christopher |last24=Bogdonoff |first24=Lenny |last25=Boiko |first25=Oleg |last26=Boyd |first26=Madelaine |last27=Brakman |first27=Anna-Luisa |last28=Brockman |first28=Greg |last29=Brooks |first29=Tim |last30=Brundage |first30=Miles |display-authors=1 |class=cs.CL }}

= Interpretability =

Another technique for investigating whether LLMs can understand is termed "mechanistic interpretability". The idea is to reverse-engineer a large language model to analyze how it internally processes information.

One example is Othello-GPT, where a small transformer was trained to predict legal Othello moves. It has been found that this model has an internal representation of the Othello board, and that modifying this representation changes the predicted legal Othello moves in the correct way. This supports the idea that LLMs have a "world model", and are not just doing superficial statistics.{{Citation |last1=Li |first1=Kenneth |title=Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task |date=2023-02-27 |arxiv=2210.13382 |last2=Hopkins |first2=Aspen K. |last3=Bau |first3=David |last4=Viégas |first4=Fernanda |last5=Pfister |first5=Hanspeter |last6=Wattenberg |first6=Martin}}{{Cite web |last=Li |first=Kenneth |date=2023-01-21 |title=Large Language Model: world models or surface statistics? |url=https://thegradient.pub/othello/ |access-date=2024-04-04 |website=The Gradient |language=en}}
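A common tool in such studies is a "probe": a small classifier trained to read a property of interest (here, the state of one board square) out of the network's hidden activations. The sketch below shows the general idea; the arrays are random placeholders rather than activations from an actual Othello-playing model, so its accuracy will sit near chance, whereas high held-out accuracy on real activations is taken as evidence of an internal world model.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: in the actual studies, `hidden_states` would be activation
# vectors extracted from the trained game-playing transformer, and `square_state`
# the true contents of one board square (0 = empty, 1 = black, 2 = white).
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 512))   # hypothetical activations
square_state = rng.integers(0, 3, size=2000)   # hypothetical board labels

# A linear probe: if a simple classifier can decode the square's state from the
# activations, the model plausibly encodes that aspect of the board internally.
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:1500], square_state[:1500])
accuracy = probe.score(hidden_states[1500:], square_state[1500:])
print(f"Probe accuracy on held-out activations: {accuracy:.2f}")
</syntaxhighlight>

The intervention experiments reported for Othello-GPT go one step further: they edit the decoded board representation and check that the model's predicted legal moves change accordingly.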

In another example, a small transformer was trained on computer programs written in the programming language Karel. Similar to the Othello-GPT example, this model developed an internal representation of Karel program semantics. Modifying this representation results in appropriate changes to the output. Additionally, the model generates correct programs that are, on average, shorter than those in the training set.{{Citation |last1=Jin |first1=Charles |title=Evidence of Meaning in Language Models Trained on Programs |date=2023-05-24 |arxiv=2305.11169 |last2=Rinard |first2=Martin}}

Researchers also studied "grokking", a phenomenon where an AI model initially memorizes the training data outputs, and then, after further training, suddenly finds a solution that generalizes to unseen data.{{Cite web |last=Schreiner |first=Maximilian |date=2023-08-11 |title=Grokking in machine learning: When Stochastic Parrots build models |url=https://the-decoder.com/grokking-in-machine-learning-when-stochastic-parrots-build-models/ |access-date=2024-05-25 |website=the decoder |language=en-US}}

= Shortcuts to reasoning =

However, when tests created to assess human language comprehension are used to evaluate LLMs, they sometimes yield false positives caused by spurious correlations within the text data.{{Citation |last1=Choudhury |first1=Sagnik Ray |title=Machine Reading, Fast and Slow: When Do Models "Understand" Language? |date=2022-09-15 |arxiv=2209.07430 |last2=Rogers |first2=Anna |last3=Augenstein |first3=Isabelle}} Models have shown examples of shortcut learning, in which a system exploits incidental correlations in the data instead of using human-like understanding.{{Cite journal |last1=Geirhos |first1=Robert |last2=Jacobsen |first2=Jörn-Henrik |last3=Michaelis |first3=Claudio |last4=Zemel |first4=Richard |last5=Brendel |first5=Wieland |last6=Bethge |first6=Matthias |last7=Wichmann |first7=Felix A. |date=2020-11-10 |title=Shortcut learning in deep neural networks |url=https://www.nature.com/articles/s42256-020-00257-z |journal=Nature Machine Intelligence |language=en |volume=2 |issue=11 |pages=665–673 |doi=10.1038/s42256-020-00257-z |arxiv=2004.07780 |issn=2522-5839}} One such experiment, conducted in 2019, tested Google's BERT language model using the argument reasoning comprehension task. BERT was prompted to choose between two statements and find the one most consistent with an argument. Below is an example of one of these prompts:{{Citation |last1=Niven |first1=Timothy |title=Probing Neural Network Comprehension of Natural Language Arguments |date=2019-09-16 |arxiv=1907.07355 |last2=Kao |first2=Hung-Yu}}

{{poemquote|Argument: Felons should be allowed to vote. A person who stole a car at 17 should not be barred from being a full citizen for life.

Statement A: Grand theft auto is a felony.

Statement B: Grand theft auto is not a felony.}}

Researchers found that specific words such as "not" hint the model towards the correct answer, allowing near-perfect scores when such cue words were included but only random selection when they were removed. This problem, and the known difficulties of defining intelligence, cause some to argue that all benchmarks which find understanding in LLMs are flawed, and that they all allow shortcuts that fake understanding.
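The kind of spurious cue Niven and Kao describe can be detected with simple counting before any model is involved. The sketch below uses made-up stand-in items rather than the actual argument reasoning corpus, and estimates how often a token such as "not" appears in the correct option, roughly the "productivity" statistic used to show that a cue alone can predict the label:

<syntaxhighlight lang="python">
# Stand-in items in the style of the argument reasoning task: each has two
# candidate statements and the index of the one consistent with the argument.
examples = [
    {"options": ["Grand theft auto is a felony.",
                 "Grand theft auto is not a felony."], "correct": 0},
    {"options": ["The ban does not reduce crime.",
                 "The ban reduces crime."], "correct": 0},
    {"options": ["Voters are not required to register.",
                 "Voters are required to register."], "correct": 0},
]

def cue_productivity(cue, examples):
    """Fraction of items containing `cue` in which it occurs in the correct option.
    Values far from 0.5 suggest the cue alone predicts the label."""
    in_correct, total = 0, 0
    for ex in examples:
        present = [cue in opt.lower().split() for opt in ex["options"]]
        if any(present):
            total += 1
            if present[ex["correct"]]:
                in_correct += 1
    return in_correct / total if total else float("nan")

print("Productivity of 'not':", cue_productivity("not", examples))
</syntaxhighlight>

A benchmark whose cue words have high productivity can be "solved" by a model that attends only to those words, which is why near-chance performance on a cue-balanced version of the task is taken as evidence of shortcut learning rather than comprehension.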

See also

References

{{Reflist}}

= Works cited =

  • {{cite book |last1=Lindholm |first1=A. |last2=Wahlström |first2=N. |last3=Lindsten |first3=F. |last4=Schön |first4=T. B. |year=2022 |title=Machine Learning: A First Course for Engineers and Scientists |publisher=Cambridge University Press |isbn=978-1108843607}}
  • {{cite AV media |first=Adrian |last=Weller |date=July 13, 2021 |title=On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 |type=video |url=https://www.youtube.com/watch?v=N5c2X8vhfBE |publisher=Alan Turing Institute}} Keynote by Emily Bender. The presentation was followed by a panel discussion.

Further reading

  • {{cite magazine |title=ChatGPT Is Dumber Than You Think: Treat it like a toy, not a tool |first=Ian |last=Bogost |date=December 7, 2022 |magazine=The Atlantic |url=https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/ |access-date=2024-01-17}}
  • {{cite news |last=Chomsky |first=Noam |date=March 8, 2023 |title=The False Promise of ChatGPT |newspaper=The New York Times |url=https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html |access-date=2024-01-17}}
  • {{cite magazine |title=It takes a body to understand the world – why ChatGPT and other language AIs don't know what they're saying |date=April 6, 2023 |last1=Glenberg |first1=Arthur |last2=Jones |first2=Cameron Robert |magazine=The Conversation |url=https://theconversation.com/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-language-ais-dont-know-what-theyre-saying-201280 |access-date=2024-01-17}}
  • {{cite book |last=McQuillan |first=D. |year=2022 |title=Resisting AI: An Anti-fascist Approach to Artificial Intelligence |title-link=Resisting AI |publisher=Bristol University Press |isbn=978-1-5292-1350-8}}
  • {{cite book |last=Thompson |first=E. |year=2022 |title=Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It |publisher=Basic Books |isbn=978-1-5416-0098-0}}
  • {{cite arXiv |title=Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT |first1=Qihuang |last1=Zhong |first2=Liang |last2=Ding |first3=Juhua |last3=Liu |first4=Bo |last4=Du |first5=Dacheng |last5=Tao |year=2023 |class=cs.CL |eprint=2302.10198}}