P(doom)

{{Short description|Term in artificial intelligence}}

'''P(doom)''' is a term in [[AI safety]] that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of [[artificial intelligence]].<ref>{{Cite web |last=Thomas |first=Sean |date=2024-03-04 |title=Are we ready for P(doom)? |url=https://www.spectator.co.uk/article/are-we-ready-for-pdoom/ |access-date=2024-06-19 |website=The Spectator |language=en-US}}</ref> The exact outcomes in question differ from one prediction to another, but generally allude to the [[existential risk from artificial general intelligence]].

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of [[GPT-4]], as high-profile figures such as [[Geoffrey Hinton]]<ref>{{Cite news |last=Metz |first=Cade |date=2023-05-01 |title='The Godfather of A.I.' Leaves Google and Warns of Danger Ahead |url=https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html |access-date=2024-06-19 |work=The New York Times |language=en-US |issn=0362-4331}}</ref> and [[Yoshua Bengio]]<ref>{{Cite news |title=One of the "godfathers of AI" airs his concerns |url=https://www.economist.com/by-invitation/2023/07/21/one-of-the-godfathers-of-ai-airs-his-concerns |access-date=2024-06-19 |newspaper=The Economist |issn=0013-0613}}</ref> began to warn of the risks of AI. In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to [[human extinction]] or similarly severe and permanent disempowerment within the next 100 years. The mean of the responses was 14.4%, and the median was 5%.<ref>{{Cite web |last=Piper |first=Kelsey |date=2024-01-10 |title=Thousands of AI experts are torn about what they've created, new study finds |url=https://www.vox.com/future-perfect/2024/1/10/24032987/ai-impacts-survey-artificial-intelligence-chatgpt-openai-existential-risk-superintelligence |access-date=2024-09-02 |website=Vox |language=en-US}}{{br}}Citing:{{br}}{{Cite web |title=2023 Expert Survey on Progress in AI [AI Impacts Wiki] |url=https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai |access-date=2024-09-02 |website=wiki.aiimpacts.org}}</ref>

== Notable P(doom) values ==

{{unreliable sources|section|date=May 2025}}

class="wikitable sortable"

!Name

! data-sort-type=number | P(doom)

!Notes

|{{sortname|Elon|Musk}}

|data-sort-value="15"|{{circa|10–30%}}{{Cite web |last=Tangalakis-Lippert |first=Katherine |title=Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway |url=https://www.businessinsider.com/elon-musk-20-percent-chance-ai-destroys-humanity-2024-3 |access-date=2024-06-19 |website=Business Insider |language=en-US}}

|Businessman and CEO of X, Tesla, and SpaceX

{{sortname|Vitalik|Buterin}}

|{{circa|10%|sortable=yes}}{{Cite tweet|last=Buterin|first=Vitalik|user=VitalikButerin|number=1729251822391447904|date=2023-11-27|title=AI risk 1: existential risk. My p(doom) is around 0.1.|access-date=2025-05-31}}

|Cofounder of Ethereum

{{sortname|Marc|Andreessen}}

|0%{{Cite magazine |last=Marantz |first=Andrew |date=2024-03-11 |title=Among the A.I. Doomsayers |url=https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers |access-date=2024-06-19 |magazine=The New Yorker |language=en-US |issn=0028-792X}}

|American businessman

Richard Sutton

|0%{{Cite AV media |url=https://www.youtube.com/watch?v=f9KDMFZqu_Y |title=NUS120 Distinguished Speaker Series {{!}} Professor Richard Sutton |language=en |access-date=2025-06-09 |via=www.youtube.com}}

|Canadian computer scientist and 2025 Turing Award laureate

|{{sortname|Geoffrey|Hinton}}

|data-sort-value="75%"|>50%{{Cite AV media |url=https://www.youtube.com/watch?v=PTF5Up1hMhw&t=2283s |title=Q&A with Geoffrey Hinton |date=2024-06-27 |last=METR (Model Evaluation & Threat Research) |access-date=2025-02-07 |via=YouTube}}

|"Godfather of AI" and 2024 Nobel Prize laureate in Physics

|{{sortname|Yann|LeCun}}

|data-sort-value="0.005%"|<0.01%{{Cite web |author1=Wayne Williams |date=2024-04-07 |title=Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees |url=https://www.techradar.com/pro/top-ai-researcher-says-ai-will-end-humanity-and-we-should-stop-developing-it-now-but-dont-worry-elon-musk-disagrees |access-date=2024-06-19 |website=TechRadar |language=en}}

|Chief AI Scientist at Meta

|{{sortname|Demis|Hassabis}}

|data-sort-value="50%"|>0%{{cite web |date=23 February 2024 |title=Demis Hassabis on Chatbots to AGI {{!}} EP 71 |url=https://www.youtube.com/watch?v=nwUARJeeplA&t=2330s |access-date=8 October 2024 |website=YouTube}}

|Co-founder and CEO of Google DeepMind and Isomorphic Labs and 2024 Nobel Prize laureate in Chemistry

|{{sortname|Shane|Legg}}

|data-sort-value="27.5%"|{{circa|5–50%}}{{cite web |url=https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai |access-date=2025-05-23 |date=2011-05-17 |title=Q&A with Shane Legg on risks from AI |work=Less Wrong }}

|Co-founder and Chief AGI Scientist of Google DeepMind

|{{sortname|Nate|Silver}}

|data-sort-value="7.5%"|5–10%{{cite web |url=https://www.natesilver.net/p/its-time-to-come-to-grips-with-ai |access-date=2025-02-03 |date=2025-01-27 |title=It's time to come to grips with AI |work=Silver Bulletin }}

|Statistician, founder of FiveThirtyEight

{{sortname|Lina|Khan}}

|{{circa|15%|sortable=yes}}

|Former chair of the Federal Trade Commission

|{{sortname|Eliezer|Yudkowsky}}

|data-sort-value="97.5%"|>95%

|Founder of the Machine Intelligence Research Institute

{{sortname|Yoshua|Bengio}}

|50%{{Cite news |date=2023-07-14 |title=It started as a dark in-joke. It could also be one of the most important questions facing humanity |url=https://www.abc.net.au/news/2023-07-15/whats-your-pdoom-ai-researchers-worry-catastrophe/102591340 |access-date=2024-06-18 |work=ABC News |language=en-AU}}

|Computer scientist and scientific director of the Montreal Institute for Learning Algorithms

|{{sortname|Emmett|Shear}}

|data-sort-value="27.5%"|5–50%

|Co-founder of Twitch and former interim CEO of OpenAI

Lex Fridman

|10%{{Cite web |date=2025-06-05 |title=transcripts Archives |url=https://lexfridman.com/category/transcripts/ |access-date=2025-06-06 |website=Lex Fridman |language=en-US}}

|American computer scientist and host of Lex Fridman Podcast

|{{sortname|Dario|Amodei}}

|data-sort-value="17.5%"|{{circa|10–25%}}{{Cite news |last=Roose |first=Kevin |date=2023-12-06 |title=Silicon Valley Confronts a Grim New A.I. Metric |url=https://www.nytimes.com/2023/12/06/business/dealbook/silicon-valley-artificial-intelligence.html |access-date=2024-06-17 |work=The New York Times |language=en-US |issn=0362-4331}}{{Cite tweet|last=Shapira|first=Liron|user=liron|number=1710520914444718459|date=2023-10-07|title=Dario Amodei's P(doom) is 10–25%.|access-date=2025-05-31}}

|CEO of Anthropic

{{sortname|Emad|Mostaque}}

|50%{{Cite tweet|last=Mostaque|first=Emad|user=EMostaque|number=1864266899170767105|date=2024-12-04|title=My P(doom) is 50%.|access-date=2025-05-31}}

|Co-founder of Stability AI

{{sortname|Grady|Booch}}

|{{circa|0%|sortable=yes}}

|American software engineer

{{sortname|Casey|Newton}}

|5%

|American technology journalist

{{sortname|Toby|Ord}}

|10%{{Cite news |date=2023-10-08 |title=Is there really a 1 in 6 chance of human extinction this century? |url=https://www.abc.net.au/news/2023-10-08/is-there-really-a-1-in-6-chance-of-human-extinction/102942530 |access-date=2024-09-01 |work=ABC News |language=en-AU}}

|Australian philosopher and author of The Precipice

{{sortname|Paul|Christiano}}

|50%{{Cite web |date=2023-05-03 |title=ChatGPT creator says there's 50% chance AI ends in 'doom' |url=https://www.independent.co.uk/tech/chatgpt-openai-ai-apocalypse-warning-b2331369.html |access-date=2024-06-19 |website=The Independent |language=en}}

|Head of research at the US AI Safety Institute

{{sortname|Zvi|Mowshowitz}}

|60%{{cite web|last=Jaffee|first=Theo|date=2023-08-18|title=Zvi Mowshowitz - Rationality, Writing, Public Policy, and AI|url=https://www.youtube.com/watch?v=ArfVYNR4Uk4&t=3962s|website=YouTube|access-date=2025-02-03}}

|Writer on artificial intelligence, former competitive Magic: The Gathering player

|{{sortname|Dan|Hendrycks}}

|data-sort-value="90%"|>80%

|Director of Center for AI Safety

{{sortname|Roman|Yampolskiy}}

|99.9%{{Cite web |last=Altchek |first=Ana |title=Why this AI researcher thinks there's a 99.9% chance AI wipes us out |url=https://www.businessinsider.com/ai-researcher-roman-yampolskiy-lex-fridman-human-extinction-prediction-2024-6 |access-date=2024-06-18 |website=Business Insider |language=en-US}}

|Latvian computer scientist

|{{sortname|Jan|Leike}}

|data-sort-value="50%"|10–90%{{Cite news |last=Railey |first=Clint |date=2023-07-12 |title=P(doom) is AI's latest apocalypse metric. Here's how to calculate your score |url=https://www.fastcompany.com/90994526/pdoom-explained-how-to-calculate-your-score-on-ai-apocalypse-metric |website=Fast Company}}

|AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI

|{{sortname|Daniel|Kokotajlo|dab=researcher}}

|data-sort-value="75%"|70–80%{{Cite web |last=Kokotajlo |first=Daniel |date=April 3, 2025 |title=2027 Intelligence Explosion Month-by-Month Model |url=https://www.youtube.com/watch?v=htOvH12T7mU |website=youtube.com}}

|AI researcher and founder of AI Futures Project, formerly of OpenAI

{{sortname|Holden|Karnofsky}}

|50%{{Cite web |last=Thomas |first=Sean |date=2024-03-04 |title=Are we ready for P(doom)? |url=https://www.spectator.co.uk/article/are-we-ready-for-pdoom/ |access-date=2025-05-31 |website=The Spectator |language=en-GB}}

|Executive Director of Open Philanthropy

{{sortname|David|Duvenaud}}

|85%{{Cite AV media |url=https://www.youtube.com/watch?v=mb9w7lFIHRM |title=Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead |date=2025-04-18 |last=Doom Debates |access-date=2025-05-31 |via=YouTube}}

|Former Anthropic Safety Team Lead

{{sortname|Andrew|Critch|Center for Applied Rationality}}

|85%{{Cite AV media |url=https://www.youtube.com/watch?v=opIvVzJF8t0 |title=Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow? |date=2024-11-15 |last=Doom Debates |access-date=2025-05-31 |via=YouTube}}

|Founder of the Center for Applied Rationality

|{{sortname|Connor|Leahy}}

|data-sort-value="95%"|90%+{{Cite AV media |url=https://www.youtube.com/watch?v=nUV5-SLdkuQ |title=Connor Leahy on Why Humanity Risks Extinction from AGI |date=2024-11-22 |last=Future of Life Institute |access-date=2025-05-31 |via=YouTube}}

|German-American AI researcher; cofounder of EleutherAI.

|{{sortname|Eli|Lifland}}

|data-sort-value="37.5%"|{{circa|35–40%}}{{Cite web |last=Gooen |first=Ozzie |date=2023-02-04 |title=Eli Lifland, on Navigating the AI Alignment Landscape |url=https://quri.substack.com/p/eli-lifland-on-navigating-the-ai-722 |access-date=2025-05-31 |website=The QURI Medley}}

|Top competitive superforecaster, co-author of AI 2027.

|{{sortname|Max|Tegmark}}

|data-sort-value="95%"|>90%{{Cite web |last=Tegmark |first=Max |date=30 April 2025 |title=My P(doom) Estimate |url=https://x.com/tegmark/status/1917580821101437280 |archive-url=http://web.archive.org/web/20250502114739/https://x.com/tegmark/status/1917580821101437280 |archive-date=2025-05-02 |access-date=2025-05-31 |website=X (formerly Twitter) |language=en}}

|Swedish-American physicist, machine learning researcher, and author, best known for theorising the mathematical universe hypothesis and co-founding the Future of Life Institute.

Sam Altman

|data-sort-value="75%"|>50%{{Cite web |date=21 June 2016 |title=Elon Musk wants to build a robot that does your housework |url=https://money.cnn.com/2016/06/21/technology/elon-musk-robot-artificial-intelligence |access-date=4 June 2025 |website=CNN |quote=At a tech forum last fall Altman said "I think AI will ...most likely lead to the end of the world."}}

|CEO of OpenAI

Paul Crowley

|data-sort-value="90%"|>80%{{Cite news |last=Marantz |first=Andrew |date=2024-03-11 |title=Among the A.I. Doomsayers |url=https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers |access-date=2025-06-05 |work=The New Yorker |language=en-US |issn=0028-792X}}

|Computer scientist at Anthropic

== Criticism ==

There has been some debate about the usefulness of P(doom) as a term, in part because of ambiguity about whether a given prediction is conditional on the existence of [[artificial general intelligence]], what time frame it covers, and what precisely counts as "doom".<ref>{{Cite web |last=King |first=Isaac |date=2024-01-01 |title=Stop talking about p(doom) |url=https://www.lesswrong.com/posts/EwyviSHWrQcvicsry/stop-talking-about-p-doom |language=en |website=LessWrong}}</ref>

== See also ==

== Notes ==

{{reflist|group=Note}}

== References ==
{{Reflist}}