Model collapse

{{Short description|Degradation of AI models trained on synthetic data}}

Model collapse{{refn|group=note|Also known by other names, such as "AI Inbreeding",{{cite web|url=https://venturebeat.com/ai/generative-inbreeding-and-its-risk-to-human-culture/| title='Generative inbreeding' and its risk to human culture | date=26 August 2023 }}{{cite web | url=https://www.axios.com/2023/08/28/ai-content-flood-model-collapse | title=AI could choke on its own exhaust as it fills the web | date=28 August 2023 }} "AI Cannibalism",{{cite web | url=https://ctlj.colorado.edu/?p=1280 | title=AI Cannibalism and the Law – Colorado Technology Law Journal }}{{cite web | url=https://www.cogitotech.com/blog/the-curious-case-of-ai-cannibalism-and-possible-solutions/ | title=The Curious Case of AI Cannibalism & Possible Solutions | date=26 July 2023 }} "Habsburg AI",{{Cite web |date=2024-08-05 |title=Inbred, gibberish or just MAD? Warnings rise about AI models |url=https://www.france24.com/en/live-news/20240805-inbred-gibberish-or-just-mad-warnings-rise-about-ai-models |access-date=2024-12-31 |website=France 24 |language=en}} and "Model Autophagy Disorder", abbreviated "MAD"{{cite web | url=https://livescu.ucla.edu/model-autophagy-disorder/ | title=Model Autophagy Disorder – the Livescu Initiative on Neuro, Narrative and AI }}{{cite web | url=https://www.tomshardware.com/news/generative-ai-goes-mad-when-trained-on-artificial-data-over-five-times | title=Generative AI Goes 'MAD' when Trained on AI-Created Data over Five Times | date=12 July 2023 }}{{cite arXiv | eprint=2307.01850 | last1=Alemohammad | first1=Sina | last2=Casco-Rodriguez | first2=Josue | last3=Luzi | first3=Lorenzo | author4=Ahmed Imtiaz Humayun | last5=Babaei | first5=Hossein | last6=LeJeune | first6=Daniel | last7=Siahkoohi | first7=Ali | last8=Baraniuk | first8=Richard G. | title=Self-Consuming Generative Models Go MAD | date=2023 | class=cs.LG }}}} is a phenomenon where machine learning models gradually degrade due to errors arising from uncurated training on the outputs of another model, such as prior versions of itself.{{Cite journal |last1=Shumailov |first1=Ilia |last2=Shumaylov |first2=Zakhar |last3=Zhao |first3=Yiren |last4=Papernot |first4=Nicolas |last5=Anderson |first5=Ross |last6=Gal |first6=Yarin |date=July 2024 |title=AI models collapse when trained on recursively generated data |journal=Nature |language=en |volume=631 |issue=8022 |pages=755–759 |doi=10.1038/s41586-024-07566-y |pmid=39048682 |issn=1476-4687 |pmc=11269175|bibcode=2024Natur.631..755S }}{{Cite arXiv |eprint=2305.17493 |class=cs.LG |first1=Ilia |last1=Shumailov |first2=Zakhar |last2=Shumaylov |title=The Curse of Recursion: Training on Generated Data Makes Models Forget |date=2023-05-31 |last3=Zhao |first3=Yiren |last4=Gal |first4=Yarin |last5=Papernot |first5=Nicolas |last6=Anderson |first6=Ross}}{{Cite news |last=Ozsevim |first=Ilkhan |date=2023-06-20 |title=Research finds ChatGPT & Bard headed for 'Model Collapse' |url=https://aimagazine.com/articles/research-finds-chatgpt-headed-for-model-collapse |access-date=2024-03-06 |language=en}}{{Cite web |last=Mok |first=Aaron |title=A disturbing AI phenomenon could completely upend the internet as we know it |url=https://www.businessinsider.com/ai-model-collapse-threatens-to-break-internet-2023-8 |access-date=2024-03-06 |website=Business Insider |language=en-US}} Such outputs are known as synthetic data. It is a possible mechanism for mode collapse.

Shumailov et al. coined the term and described two stages of the degradation, early and late model collapse:

  • In early model collapse, the model begins losing information about the tails of the distribution – mostly affecting minority data. Later work highlighted that early model collapse is hard to notice, since overall performance may appear to improve, while the model loses performance on minority data.{{Cite book |last1=Wyllie |first1=Sierra |last2=Shumailov |first2=Ilia |last3=Papernot |first3=Nicolas |chapter=Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias |date=2024-06-05 |title=The 2024 ACM Conference on Fairness, Accountability, and Transparency |chapter-url=https://dl.acm.org/doi/10.1145/3630106.3659029 |series=FAccT '24 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=2113–2147 |doi=10.1145/3630106.3659029 |isbn=979-8-4007-0450-5|arxiv=2403.07857 }}
  • In late model collapse, the model loses a significant proportion of its performance, confusing concepts and losing most of its variance.

Mechanism

Using synthetic data as training data can lead to issues with the quality and reliability of the trained model.{{cite arXiv |eprint=2307.01850 |class=cs.LG |first1=Sina |last1=Alemohammad |first2=Josue |last2=Casco-Rodriguez |title=Self-Consuming Generative Models Go MAD |date=July 4, 2023 |last3=Luzi |first3=Lorenzo |last4=Humayun |first4=Ahmed Imtiaz |last5=Babaei |first5=Hossein |last6=LeJeune |first6=Daniel |last7=Siahkoohi |first7=Ali |last8=Baraniuk |first8=Richard G.}}{{cite conference |title=Self-Consuming Generative Models Go MAD |url=https://openreview.net/forum?id=ShjMHfmPs0 |conference=The Twelfth International Conference on Learning Representations |language=English}} Model collapse occurs for three main reasons:

  1. functional approximation errors
  2. sampling errors
  3. learning errors

Importantly, model collapse happens even in the simplest of models, where not all of the error sources are present. In more complex models the errors often compound, leading to faster collapse.
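The role of the sampling-error source alone can be illustrated with a toy experiment (a minimal sketch for illustration, not drawn from the cited papers): repeatedly refitting a two-category empirical distribution on its own samples eventually loses the rare category, after which it can never reappear.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# True distribution: one common event and one rare "tail" event.
p = np.array([0.99, 0.01])
n_samples, n_generations = 100, 200

for gen in range(n_generations):
    # Each generation samples from the current model, then refits the model
    # as the empirical frequencies of its own output. Only sampling error is
    # present here; there is no approximation or learning error.
    counts = rng.multinomial(n_samples, p)
    p = counts / n_samples
    if p[1] == 0.0:
        print(f"Tail event lost for good at generation {gen}")
        break
</syntaxhighlight>

Once the estimated tail probability hits zero, no later generation can regenerate it, mirroring early model collapse on minority data.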

Disagreement over real-world impact

[[File:Model Collapse in Generative Models Can Be Avoided By Accumulating Data.png|thumb|Model collapse in generative models can be avoided by accumulating data.]]

Some researchers and commentators on model collapse warn that the phenomenon could fundamentally threaten future generative AI development: as AI-generated data is shared on the Internet, it will inevitably end up in future training datasets, which are often crawled from the Internet. If training on "slop" (large quantities of unlabeled synthetic data) inevitably leads to model collapse, this would pose a difficult problem.{{cite web |title=What is Model Collapse and how to avoid it |url=https://www.theregister.com/2024/01/26/what_is_model_collapse/ |website=The Register |access-date=11 July 2024}}

However, other researchers have recently disagreed with this argument, showing that if synthetic data accumulates alongside human-generated data, model collapse is avoided.{{Cite arXiv |eprint=2404.01413 |class=cs.LG |first1=Matthias |last1=Gerstgrasser |first2=Rylan |last2=Schaeffer |title=Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data |date=2024-04-01 |last3=Dey |first3=Apratim |last4=Rafailov |first4=Rafael |last5=Sleight |first5=Henry |last6=Hughes |first6=John |last7=Korbak |first7=Tomasz |last8=Agrawal |first8=Rajashree |last9=Pai |first9=Dhruv |last10=Gromov |first10=Andrey |last11=Roberts |first11=Daniel A. |last12=Yang |first12=Diyi |last13=Donoho |first13=David L. |last14=Koyejo |first14=Sanmi}} These researchers argue that data accumulating over time is a more realistic description of reality than deleting all existing data every year, and that the real-world impact of model collapse may not be as catastrophic as feared.{{cite web |title=Big brains divided over training AI with more AI: Is model collapse inevitable? |url=https://www.theregister.com/2024/05/09/ai_model_collapse/ |website=The Register |access-date=11 July 2024}}
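The difference between the two data regimes can be sketched in a few lines for the Gaussian toy model discussed below (a hedged illustration of the accumulation argument, not the cited papers' exact experiments): replacing the dataset each generation lets the fitted parameters drift, whereas accumulating synthetic data alongside the original data keeps the fit anchored.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, m, generations = 0.0, 1.0, 200, 100

def final_std(accumulate: bool) -> float:
    pool = rng.normal(mu, sigma, m)      # original, human-generated data
    for _ in range(generations):
        mu_hat, sigma_hat = pool.mean(), pool.std(ddof=1)
        synthetic = rng.normal(mu_hat, sigma_hat, m)
        # "Replace" trains the next model on synthetic data only;
        # "accumulate" keeps all earlier data alongside the new samples.
        pool = np.concatenate([pool, synthetic]) if accumulate else synthetic
    return pool.std(ddof=1)

print("replace:   ", final_std(False))   # drifts away from sigma = 1
print("accumulate:", final_std(True))    # stays close to sigma = 1
</syntaxhighlight>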

An alternative branch of the literature investigates the use of machine learning detectors and watermarking to identify model-generated data and filter it out.{{Cite journal |last1=Kirchenbauer |first1=John |last2=Geiping |first2=Jonas |last3=Wen |first3=Yuxin |last4=Katz |first4=Jonathan |last5=Miers |first5=Ian |last6=Goldstein |first6=Tom |date=2023-07-03 |title=A Watermark for Large Language Models |url=https://proceedings.mlr.press/v202/kirchenbauer23a.html |journal=Proceedings of the 40th International Conference on Machine Learning |language=en |publisher=PMLR |pages=17061–17084}}{{Cite web |date=2022-11-29 |title=My AI Safety Lecture for UT Effective Altruism |url=https://scottaaronson.blog/?p=6823 |access-date=2024-06-22 |website=Shtetl-Optimized |language=en-US}}

Mathematical models of the phenomenon

= 1D Gaussian model =

In 2024, a first attempt was made at illustrating collapse for the simplest possible model: a single-dimensional normal distribution fitted using unbiased estimators of the mean and variance, computed on samples from the previous generation.

To make this more precise, suppose the original data follow a normal distribution X^0 \sim \mathcal{N}(\mu,\sigma^2), and that we possess M_0 samples X^0_j for j \in \{1, \dots, M_0\}. Denoting a general sample X^i_j as sample j \in \{1, \dots, M_i\} at generation i, the next-generation model is estimated using the sample mean and variance:

\mu_{i+1} = \frac{1}{M_i}\sum_j X^i_j; \quad \sigma_{i+1}^2 = \frac{1}{M_i-1}\sum_j (X^i_j-\mu_{i+1})^2.

This leads to a conditionally normal next-generation model X^{i+1}_j \mid \mu_{i+1}, \sigma_{i+1} \sim \mathcal{N}(\mu_{i+1},\sigma_{i+1}^2). In theory, this is enough to calculate the full distribution of X^i_j. However, even after the first generation, the full distribution is no longer normal: it follows a variance-gamma distribution.
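The recursion is straightforward to simulate; a minimal sketch, assuming a constant sample size M at every generation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # parameters of the original distribution
M, n_generations = 100, 1000

samples = rng.normal(mu, sigma, M)        # X^0_j
for _ in range(n_generations):
    mu_next = samples.mean()              # unbiased estimate of the mean
    sigma2_next = samples.var(ddof=1)     # unbiased estimate of the variance
    # The next generation is drawn from N(mu_{i+1}, sigma^2_{i+1}).
    samples = rng.normal(mu_next, np.sqrt(sigma2_next), M)

# After many generations the fitted parameters have typically wandered
# far from the original (mu, sigma^2) = (0, 1).
print(mu_next, sigma2_next)
</syntaxhighlight>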

To continue the analysis, instead of writing the probability density function at each generation, it is possible to construct the samples explicitly in terms of independent random variables using Cochran's theorem. To be precise, \mu_1 and \sigma_1 are independent, with \mu_1 \sim \mathcal{N}\left(\mu, \frac{\sigma^2}{M_0}\right) and (M_0-1)\,\sigma_1^2 \sim \sigma^2\,\Gamma\left(\frac{M_0-1}{2}, \frac12\right), the latter following a gamma distribution. Denoting by Z (with appropriate indices) standard Gaussian random variables distributed according to \mathcal{N}(0, 1), and by S^i random variables distributed as \frac{1}{M_{i-1}-1}\Gamma\left(\frac{M_{i-1}-1}{2}, \frac12\right), it is possible to write the samples at each generation as

X^0_j = \mu + \sigma Z^0_j,

X^1_j = \mu + \frac{\sigma}{\sqrt{M_0}}Z^1 + \sigma\sqrt{S^1}Z^1_j,

and more generally

X^n_j = \mu + \frac{\sigma}{\sqrt{M_0}}Z^1 + \frac{\sigma}{\sqrt{M_1}}\sqrt{S^1}Z^2 + \dots

+ \frac{\sigma}{\sqrt{M_{n-1}}}\sqrt{S^1\times\dots\times S^{n-1}}Z^n+\sigma\sqrt{S^1\times\dots\times S^{n}}Z^n_j.

Note that these formulas do not describe joint distributions, as Z^n and S^n depend directly on Z^{n-1}_j; however, when considering X^n_j on its own, the formula above provides all the information about its full distribution.
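The representation is easy to check numerically (a sketch assuming a constant sample size M; the S^i are drawn as scaled chi-squared variables, \chi^2_{M-1}/(M-1), which matches the gamma law above):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, M, n, trials = 0.0, 1.0, 50, 10, 100_000

Z = rng.standard_normal((trials, n + 1))          # Z^1, ..., Z^n, Z^n_j
S = rng.chisquare(M - 1, (trials, n)) / (M - 1)   # S^1, ..., S^n
prod = np.cumprod(S, axis=1)                      # S^1, S^1 S^2, ...

# Assemble X^n_j term by term, following the formula above with M_i = M.
x = mu + sigma / np.sqrt(M) * Z[:, 0]
for k in range(1, n):
    x += sigma / np.sqrt(M) * np.sqrt(prod[:, k - 1]) * Z[:, k]
x += sigma * np.sqrt(prod[:, n - 1]) * Z[:, n]

print(x.var())                  # Monte Carlo estimate of Var(X^n_j)
print(sigma**2 * (1 + n / M))   # sigma^2 (1 + n/M), cf. the result below
</syntaxhighlight>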

To analyse model collapse, we can first calculate the variance and mean of samples at generation n. This tells us what kind of distributions we expect to arrive at after n generations. It is possible to find exact values in closed form, but the mean and variance of the square root of a gamma-distributed variable are expressed in terms of gamma functions, making the result quite cumbersome. It is then possible to expand all results to second order in each of the 1/M_i, assuming each sample size to be large, and to show that

\frac{1}{\sigma^2}\operatorname{Var}(X^n_j) = \frac{1}{M_0}+\frac{1}{M_1}+ \dots + \frac{1}{M_{n-1}}+1 + \mathcal{O}\left(M_i^{-2}\right).

If all sample sizes M_i = M are constant, this diverges linearly as n\to\infty:

\operatorname{Var}(X^n_j) = \sigma^2\left(1+\frac{n}{M}\right); \quad \mathbb{E}(X^n_j) = \mu.
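The leading terms can be read off the explicit representation above (a sketch of the reasoning, treating the Z and S variables in the representation as independent): the Z terms have zero mean, so all cross terms vanish, and each S^i has unit mean, giving

\frac{1}{\sigma^2}\operatorname{Var}(X^n_j) = \frac{1}{M_0} + \frac{\mathbb{E}[S^1]}{M_1} + \dots + \frac{\mathbb{E}[S^1 \cdots S^{n-1}]}{M_{n-1}} + \mathbb{E}[S^1 \cdots S^n], \qquad \mathbb{E}[S^i] = 1,

which reduces to the sum of the 1/M_i plus one.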

This is the same scaling as for a single-dimensional Gaussian random walk. However, the divergence of the variance of X^n_j does not directly provide any information about the corresponding estimates of \mu_{n+1} and \sigma_{n+1}, particularly how different they are from the original \mu and \sigma. It turns out to be possible to calculate the distance between the true distribution and the approximated distribution at step n+1, using the Wasserstein-2 distance (which is also sometimes referred to as risk):

\mathbb{E}\left[\mathbb{W}^2_2\left(\mathcal{N}(\mu,\sigma^2),\mathcal{N}(\mu_{n+1},\sigma^2_{n+1})\right)\right]=\frac{3}{2}\sigma^2\left(\frac{1}{M_0}+\frac{1}{M_1}+ \dots + \frac{1}{M_{n}}\right)+\mathcal{O}\left(M_i^{-2}\right),

\operatorname{Var}\left[\mathbb{W}^2_2\left(\mathcal{N}(\mu,\sigma^2),\mathcal{N}(\mu_{n+1},\sigma^2_{n+1})\right)\right]=\frac{1}{2}\sigma^4\left(\frac{3}{M_0^2}+\frac{3}{M_1^2}+ \dots + \frac{3}{M_{n}^2} + \sum_{i\neq j}\frac{4}{M_iM_j}\right)+\mathcal{O}\left(M_i^{-3}\right).

This directly shows why model collapse occurs in this simple model: due to errors from re-sampling the approximated distribution, each generation ends up corresponding to a new step in a random walk of model parameters. For a constant sample size at each generation, the average distance from the starting point diverges, and in order for the end distribution approximation to be accurate, or for the distance to remain finite, the sampling rate M_i needs to increase superlinearly, i.e. one needs to collect increasingly more samples over time, perhaps quadratically. However, even in that case the expected distance after n steps remains non-zero, and the only case in which it does in fact end up being zero is when sampling is infinite at each step. Overall, this only shows us how far on average one ends up from the original distribution, but the process can only "terminate" if the estimated variance at a certain generation becomes small enough, effectively turning the distribution into a delta function. This is shown to occur for a general Gaussian model in the subsection below. Empirical investigation has confirmed this theoretical analysis.{{cite arXiv |last1=Borji |first1=Ali |title=A Note on Shumailov et al. (2024): "AI Models Collapse When Trained on Recursively Generated Data" |date=2024-10-16 |eprint=2410.12954 |class=cs.LG}}
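For one-dimensional Gaussians the squared Wasserstein-2 distance has the closed form \mathbb{W}^2_2 = (\mu-\mu')^2 + (\sigma-\sigma')^2, so the expected risk above can be checked by direct simulation (a minimal sketch with constant sample size M):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, M, n, trials = 0.0, 1.0, 100, 5, 2000

risks = []
for _ in range(trials):
    samples = rng.normal(mu, sigma, M)
    for _ in range(n + 1):             # yields estimates mu_{n+1}, sigma_{n+1}
        mu_hat = samples.mean()
        sigma_hat = samples.std(ddof=1)
        samples = rng.normal(mu_hat, sigma_hat, M)
    # Closed form of W_2^2 between N(mu, sigma^2) and N(mu_hat, sigma_hat^2).
    risks.append((mu_hat - mu) ** 2 + (sigma_hat - sigma) ** 2)

print(np.mean(risks))                 # Monte Carlo estimate of the expected risk
print(1.5 * sigma**2 * (n + 1) / M)   # (3/2) sigma^2 sum_i 1/M_i with M_i = M
</syntaxhighlight>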

= N-D Gaussian model =

Furthermore, in the case of a multidimensional Gaussian model with fully synthetic data, exact collapse can be shown.

= Linear regression =

In the case of a linear regression model,{{Cite arXiv |last1=Dohmatob |first1=Elvis |last2=Feng |first2=Yunzhen |last3=Kempe |first3=Julia |date=2024-02-12 |title=Model Collapse Demystified: The Case of Regression |class=cs.LG |eprint =2402.07712}}{{cite arXiv |last1=Dohmatob |first1=Elvis |title=A Tale of Tails: Model Collapse as a Change of Scaling Laws |date=2024-02-10 |eprint =2402.07043 |last2=Feng |first2=Yunzhen |last3=Yang |first3=Pu |last4=Charton |first4=Francois |last5=Kempe |first5=Julia|class=cs.LG }} scaling laws and bounds on learning can be obtained.
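A minimal sketch of the flavour of these results (an illustrative toy, not the exact setting of the cited papers): refitting ordinary least squares on labels generated by the previous generation's fit, with fresh label noise each time, makes the parameter error accumulate across generations, much like the Gaussian random walk above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d, n_train, noise, generations = 10, 100, 0.5, 50

w_true = rng.standard_normal(d)
w_hat = w_true.copy()

errors = []
for _ in range(generations):
    # Fresh inputs; labels are fully synthetic, produced by the previous
    # generation's model and corrupted by new label noise.
    X = rng.standard_normal((n_train, d))
    y = X @ w_hat + noise * rng.standard_normal(n_train)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    errors.append(np.sum((w_hat - w_true) ** 2))

print(errors[0], errors[-1])   # squared parameter error grows over generations
</syntaxhighlight>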

= Statistical language model =

In the case of a linear softmax classifier for next-token prediction,{{cite arXiv |last1=Seddik |first1=Mohamed El Amine |title=How Bad is Training on Synthetic Data? A Statistical Analysis of Language Model Collapse |date=2024-04-07|eprint = 2404.05090 |last2=Chen |first2=Suei-Wen |last3=Hayou |first3=Soufiane |last4=Youssef |first4=Pierre |last5=Debbah |first5=Merouane|class=cs.LG }} exact bounds on learning can be obtained even with a partially synthetic dataset.

Impact on large language models

In the context of large language models, research has found that training LLMs on predecessor-generated text (synthetic data produced by previous models) causes a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, an effect especially pronounced for tasks demanding high levels of creativity.{{cite arXiv |last1=Guo |first1=Yanzhu |title=The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text |date=2024-04-16 |eprint=2311.09807 |last2=Shang |first2=Guokan |last3=Vazirgiannis |first3=Michalis |last4=Clavel |first4=Chloé|class=cs.CL }}
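Such diversity declines are typically quantified with simple corpus statistics; a sketch of one common lexical metric, the distinct-n ratio (an assumed example, not necessarily the exact metric of the cited study):

<syntaxhighlight lang="python">
def distinct_n(texts: list[str], n: int = 2) -> float:
    """Fraction of unique n-grams among all n-grams in a set of model outputs."""
    ngrams = []
    for text in texts:
        tokens = text.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# Comparing the metric across generations' outputs would expose the decline,
# e.g. distinct_n(generation_0_outputs) > distinct_n(generation_5_outputs).
</syntaxhighlight>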

Notes

{{Reflist|group=note}}

References

{{Reflist}}