
{{Short description|Philosopher and writer (born 1973)}}

{{Use dmy dates|date=November 2021}}

{{Infobox philosopher

| region = Western philosophy

| era = Contemporary philosophy

| name = Nick Bostrom

| image = Prof Nick Bostrom 324-1.jpg

| caption = Bostrom in 2020

| birth_name = Niklas Boström

| birth_date = {{birth date and age|df=yes|1973|3|10}}

| birth_place = Helsingborg, Sweden

| education = {{unbulleted list|

University of Gothenburg (BA)|

Stockholm University (MA)|

King's College London (MSc)|

London School of Economics (PhD)}}

| institutions = {{unbulleted list|
Yale University|
University of Oxford|
Future of Humanity Institute}}

| thesis_title = Observational Selection Effects and Probability

| thesis_url = http://etheses.lse.ac.uk/2642/

| awards =

| spouse = Susan

| notable_ideas = {{unbulleted list|
Anthropic bias|
Reversal test|
Simulation hypothesis|
Existential risk studies|
Singleton|
Ancestor simulation|
Information hazard|
Infinitarian paralysis{{cite web|url=https://nickbostrom.com/ethics/infinite.pdf|title=Infinite Ethics|website=nickbostrom.com|access-date=21 February 2019}}|
Self-indication assumption|
Self-sampling assumption}}

| school_tradition = Analytic philosophy{{cite magazine|magazine=The New Yorker |first=Raffi |last=Khatchadourian |title=The Doomsday Invention |url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|pages=64–79|date=23 November 2015|volume=XCI|number=37|issn=0028-792X}}

| main_interests = {{unbulleted list|
Philosophy of artificial intelligence|
Bioethics}}

|influences =

|influenced =

| website = {{URL |https://nickbostrom.com/}}

|thesis_year=2000}}{{Transhumanism|People}}

Nick Bostrom ({{IPAc-en|ˈ|b|ɒ|s|t|r|əm}} {{respell|BOST|rəm}}; {{langx|sv|Niklas Boström}} {{IPA|sv|ˈnɪ̌kːlas ˈbûːstrœm|}}; born 10 March 1973){{cite web|url=http://www.nickbostrom.com/cv.html|title=nickbostrom.com|publisher=Nickbostrom.com|access-date=16 October 2014|archive-url=https://web.archive.org/web/20180830174436/https://nickbostrom.com/cv.html|archive-date=30 August 2018|url-status=dead}} is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford{{cite news |last1=Shead |first1=Sam |title=How Britain's oldest universities are trying to protect humanity from risky A.I. |url=https://www.cnbc.com/2020/05/25/oxford-cambridge-ai.html |work=CNBC |access-date=5 June 2023 |language=en |date=25 May 2020}} and is now Principal Researcher at the Macrostrategy Research Initiative.{{Cite web |title=Nick Bostrom's Home Page |url=https://nickbostrom.com/ |access-date=2024-04-19 |website=nickbostrom.com |language=en}}

Bostrom is the author of ''Anthropic Bias: Observation Selection Effects in Science and Philosophy'' (2002), ''Superintelligence: Paths, Dangers, Strategies'' (2014), and ''Deep Utopia: Life and Meaning in a Solved World'' (2024).

Bostrom believes that advances in artificial intelligence (AI) may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". He views this as a major source of opportunities and existential risks.{{Cite web |title=Nick Bostrom on the birth of superintelligence |url=https://bigthink.com/series/the-big-think-interview/superintelligence/ |access-date=2023-08-14 |website=Big Think |language=en-US}}

Early life and education

Born as Niklas Boström in 1973 in Helsingborg, Sweden,{{cite news|last1=Thornhill|first1=John|title=Artificial intelligence: can we control it?|url=http://www.ft.com/cms/s/0/46d12e7c-4948-11e6-b387-64ab0a67014c.html |archive-url=https://ghostarchive.org/archive/20221210/http://www.ft.com/cms/s/0/46d12e7c-4948-11e6-b387-64ab0a67014c.html |archive-date=10 December 2022 |url-access=subscription |url-status=live|access-date=10 August 2016|work=Financial Times|date=14 July 2016}} {{subscription required}} he disliked school at a young age and spent his last year of high school learning from home. He was interested in a wide variety of academic areas, including anthropology, art, literature, and science.

He received a B.A. degree from the University of Gothenburg in 1994.{{Cite web|last=Bostrom|first=Nick|title=CV|url=https://www.nickbostrom.com/cv.pdf}} He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. He also did some turns on London's stand-up comedy circuit. In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled ''Observational Selection Effects and Probability''.{{cite thesis |last=Bostrom|first=Nick |date=2000 |title=Observational selection effects and probability|type=PhD|publisher=London School of Economics and Political Science |url=http://etheses.lse.ac.uk/2642/ |access-date=25 June 2021}} He held a teaching position at Yale University from 2000 to 2002, and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.{{cite web |date=8 September 2014 |title=Nick Bostrom on artificial intelligence |url=http://blog.oup.com/2014/09/interview-nick-bostrom-superintelligence/ |access-date=4 March 2015 |publisher=Oxford University Press}}

Research and writing

= Existential risk =

Bostrom's research concerns the future of humanity and long-term outcomes.{{cite web |last1=Andersen |first1=Ross |title=Omens |url=http://aeon.co/magazine/philosophy/ross-andersen-human-extinction/ |url-status=dead |archive-url=https://web.archive.org/web/20151018225831/http://aeon.co/magazine/philosophy/ross-andersen-human-extinction/ |archive-date=18 October 2015 |access-date=5 September 2015 |website=Aeon |publisher=}} He discusses existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". Bostrom is mostly concerned about anthropogenic risks, which are risks arising from human activities, particularly from new technologies such as advanced artificial intelligence, molecular nanotechnology, or synthetic biology.{{Cite web |last=Andersen |first=Ross |date=2012-03-06 |title=We're Underestimating the Risk of Human Extinction |url=https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/ |access-date=2023-07-06 |website=The Atlantic |language=en-US}}

In 2005, Bostrom founded the Future of Humanity Institute which, until its shutdown in 2024, researched the far future of human civilization.{{Cite web |last=Maiberg |first=Emanuel |date=2024-04-17 |title=Institute That Pioneered AI 'Existential Risk' Research Shuts Down |url=https://www.404media.co/institute-that-pioneered-ai-existential-risk-research-shuts-down/ |access-date=2024-09-12 |website=404 Media |language=en}}{{Cite web |date=2024-04-17 |title=Future of Humanity Institute |url=https://www.futureofhumanityinstitute.org/ |access-date=2024-04-17 |archive-url=https://web.archive.org/web/20240417000845/https://www.futureofhumanityinstitute.org/ |archive-date=17 April 2024 }} He is also an adviser to the Centre for the Study of Existential Risk.

In the 2008 essay collection ''Global Catastrophic Risks'', editors Bostrom and Milan M. Ćirković characterize the relationship between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects{{cite journal |first1=Max |last1=Tegmark |author-link=Max Tegmark |first2=Nick |last2=Bostrom |title=Astrophysics: is a doomsday catastrophe likely? |journal=Nature |volume=438 |issue=7069 |page=754 |year=2005 |doi=10.1038/438754a |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/5923/How_Unlikely_is_a_Doomsday_Catastrophe_plus_Supplementary_Materials.pdf |pmid=16341005 |bibcode=2005Natur.438..754T |s2cid=4390013 }}{{dead link|date=May 2025|bot=medic}}{{cbignore|bot=medic}} and the Fermi paradox.{{cite news |last=Overbye |first=Dennis|author-link=Dennis Overbye|title=The Flip Side of Optimism About Life on Other Planets |url=https://www.nytimes.com/2015/08/04/science/space/the-flip-side-of-optimism-about-life-on-other-planets.html|date=3 August 2015 |work=The New York Times |access-date=29 October 2015 }}

== Vulnerable world hypothesis ==

{{Main|Vulnerable world hypothesis}}

In a paper called "The Vulnerable World Hypothesis",{{Cite journal |last=Bostrom |first=Nick |date=November 2019 |title=The Vulnerable World Hypothesis |url=https://nickbostrom.com/papers/vulnerable.pdf |journal=Global Policy|volume=10 |issue=4 |pages=455–476 |doi=10.1111/1758-5899.12718 }} Bostrom suggests that there may be some technologies that destroy human civilization by default{{efn|Bostrom says that the risk can be reduced if society sufficiently exits what he calls a "semi-anarchic default condition", which roughly means limited capabilities for preventive policing and global governance, and having individuals with diverse motivations.{{Cite web |last=Abhijeet |first=Katte |date=2018-12-25 |title=AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom |url=https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/ |website=Analytics India Magazine |language=en}}}} when discovered. Bostrom proposes a framework for classifying and dealing with these vulnerabilities. He also gives counterfactual thought experiments of how such vulnerabilities could have historically occurred, e.g. if nuclear weapons had been easier to develop or had ignited the atmosphere (as Edward Teller had feared).{{Cite web |last=Piper |first=Kelsey |date=2018-11-19 |title=How technological progress is making it likelier than ever that humans will destroy ourselves |url=https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks |access-date=2023-07-05 |website=Vox |language=en}}

= Digital sentience =

Bostrom supports the substrate independence principle, the idea that consciousness can emerge on various types of physical substrates, not only in "carbon-based biological neural networks" like the human brain.{{Cite journal |last=Bostrom |first=Nick |date=2003 |title=Are You Living In a Computer Simulation? |url=https://simulation-argument.com/simulation.pdf |journal=Philosophical Quarterly|volume=53 |issue=211 |pages=243–255 |doi=10.1111/1467-9213.00309 }} He considers that "sentience is a matter of degree"{{Cite news |last=Jackson |first=Lauren |date=2023-04-12 |title=What if A.I. Sentience Is a Question of Degree? |language=en-US |work=The New York Times |url=https://www.nytimes.com/2023/04/12/world/artificial-intelligence-nick-bostrom.html |access-date=2023-07-05 |issn=0362-4331}} and that digital minds can in theory be engineered to have a much higher rate and intensity of subjective experience than humans, using fewer resources. Such highly sentient machines, which he calls "super-beneficiaries", would be extremely efficient at achieving happiness. He recommends finding "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".{{Cite web |last=Fisher |first=Richard |date=13 November 2020 |title=The intelligent monster that you should let eat you |url=https://www.bbc.com/future/article/20201111-philosophy-of-utility-monsters-and-artificial-intelligence |access-date=2023-07-05 |website=BBC |language=en}}

=Anthropic reasoning=

Bostrom has published numerous articles on anthropic reasoning, as well as the book ''Anthropic Bias: Observation Selection Effects in Science and Philosophy''. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.{{cite book| last1=Bostrom |first1=Nick|title=Anthropic Bias: Observation Selection Effects in Science and Philosophy |date=2002 |publisher=Routledge |location=New York |isbn=978-0-415-93858-7|pages=44–58|url=http://www.anthropic-principle.com/sites/anthropic-principle.com/files/pdfs/anthropicbias.pdf |access-date=22 July 2014}}

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolutionary theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these cases. He introduces the self-sampling assumption (SSA) and analyzes the self-indication assumption (SIA), shows how they lead to different conclusions in a number of cases, and identifies how each is affected by paradoxes or counterintuitive implications in certain thought experiments. He argues against SIA and proposes refining SSA into the strong self-sampling assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".{{Cite web |last=Manson |first=Neil |date=2003-02-09 |title=Anthropic Bias: Observation Selection Effects in Science and Philosophy |url=https://ndpr.nd.edu/reviews/anthropic-bias-observation-selection-effects-in-science-and-philosophy/ |website=Notre Dame Philosophical Reviews}}
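
A toy example, not taken from Bostrom's book and included here only as a rough sketch of how the two assumptions diverge: suppose two hypotheses <math>T_1</math> and <math>T_2</math> are equally probable a priori, and that <math>T_1</math> implies a world containing <math>N_1</math> observers in one's reference class while <math>T_2</math> implies <math>N_2</math>. Under SIA, the mere fact of one's existence favors the more populous hypothesis,

<math display="block">\frac{P(T_1 \mid \text{I exist})}{P(T_2 \mid \text{I exist})} \approx \frac{N_1}{N_2},</math>

whereas under SSA one's existence alone carries no such weight; instead, one reasons as if one's "birth rank" <math>r</math> were drawn uniformly from the observers in the reference class, so that <math>P(r \mid T_i) = 1/N_i</math>, which tends to shift credence toward hypotheses with fewer observers (the pattern exploited by the doomsday argument).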

In later work, he proposed with Milan M. Ćirković and Anders Sandberg the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. They suggest that events that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.{{Cite web |last=Brannen |first=Peter |date=2018-03-15 |title=Why Earth's History Appears So Miraculous |url=https://www.theatlantic.com/science/archive/2018/03/human-existence-will-look-more-miraculous-the-longer-we-survive/554513/ |access-date=2024-09-12 |website=The Atlantic |language=en}}{{cite web |title=Anthropic Shadow: Observation Selection Effects and Human Extinction Risks |url=http://www.nickbostrom.com/papers/anthropicshadow.pdf |access-date=16 October 2014 |publisher=Nickbostrom.com}}

==Simulation argument==

{{main|Simulation hypothesis}}

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:{{Cite web |last=Sutter |first=Paul |date=2024-01-31 |title=Could our Universe be a simulation? How would we even tell? |url=https://arstechnica.com/science/2024/01/could-our-universe-be-a-simulation-how-would-we-even-tell/ |access-date=2024-09-12 |website=Ars Technica |language=en-us}}{{cite news|last1=Nesbit|first1=Jeff|title=Proof of the Simulation Argument|url=https://www.usnews.com/news/blogs/at-the-edge/2012/12/17/proof-of-the-simulation-argument|access-date=17 March 2017|work=US News}}

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
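
The trilemma rests on a simple counting argument over observers. A rough sketch, using notation along the lines of Bostrom's 2003 paper (the symbols here are illustrative): let <math>f_P</math> be the fraction of human-level civilizations that reach a posthuman stage, <math>\bar{N}</math> the average number of ancestor-simulations run by a posthuman civilization, and <math>\bar{H}</math> the average number of individuals who have lived in a civilization before it reaches that stage. The fraction of all observers with human-type experiences who live in simulations is then

<math display="block">f_{\text{sim}} = \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}.</math>

Writing <math>\bar{N} = f_I \, \bar{N}_I</math>, where <math>f_I</math> is the fraction of posthuman civilizations interested in running ancestor-simulations and <math>\bar{N}_I</math> is the average number run by those that are interested, and observing that <math>\bar{N}_I</math> would be astronomically large at technological maturity, at least one of <math>f_P \approx 0</math>, <math>f_I \approx 0</math>, or <math>f_{\text{sim}} \approx 1</math> must hold.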

=Ethics of human enhancement=

Bostrom is favorably disposed toward "human enhancement", or "self-improvement and human perfectibility through the ethical application of science", and is a critic of bio-conservative views.

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association{{cite news |first=John |last=Sutherland |title=The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings |newspaper=The Guardian | date=9 May 2006 |url=https://www.theguardian.com/science/2006/may/09/academicexperts.genetics}} (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies,{{Cite web |date=2024 |title=A Philosophical History of Transhumanism |url=https://philosophynow.org/issues/160/A_Philosophical_History_of_Transhumanism |access-date=2024-09-12 |website=Philosophy Now}} although he is no longer involved with either of these organisations.

In 2005, Bostrom published the short story "The Fable of the Dragon-Tyrant" in the Journal of Medical Ethics. A shorter version was published in 2012 in Philosophy Now.{{Cite journal |last=Bostrom |first=Nick |date=2012-06-12|title=The Fable of the Dragon-Tyrant |url=https://philosophynow.org/issues/89/The_Fable_of_the_Dragon-Tyrant |journal=Philosophy Now|language=en|volume=89|pages=6–9}} The fable personifies death as a dragon that demands a tribute of thousands of people every day. The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal. YouTuber CGP Grey created an animated version of the story.{{Cite video |url=https://www.youtube.com/watch?v=cZYNADOHhVY |title=Fable of the Dragon-Tyrant |date=24 April 2018 |type=Video}}

With philosopher Toby Ord, he proposed the reversal test in 2006. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.{{cite journal |first1=Nick |last1=Bostrom |first2=Toby |last2=Ord |title=The reversal test: eliminating status quo bias in applied ethics |journal=Ethics |volume=116 |issue=4 |pages=656–679 |year=2006 |url=http://www.nickbostrom.com/ethics/statusquo.pdf |doi=10.1086/505233|pmid=17039628 |s2cid=12861892 }}

Bostrom's work also considers potential dysgenic effects in human populations, but he thinks genetic engineering can provide a solution and that "In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot".{{Cite journal |last=Bostrom |first=Nick |date=2002 |title=Existential Risks - Analyzing Human Extinction Scenarios and Related Hazards |url=https://nickbostrom.com/existential/risks |journal=Journal of Evolution and Technology}}

=Technology strategy=

{{see also|Differential technological development}}

Bostrom has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.

In 2011, Bostrom founded the Oxford Martin Program on the Impacts of Future Technology.{{cite web |title=Professor Nick Bostrom : People |url=http://www.oxfordmartin.ox.ac.uk/people/22 |url-status=dead |archive-url=https://web.archive.org/web/20180915073347/https://www.oxfordmartin.ox.ac.uk/people/22 |archive-date=15 September 2018 |access-date=16 October 2014 |publisher=Oxford Martin School}}

Bostrom's theory of the unilateralist's curse has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.{{cite news|last1=Lewis|first1=Gregory|title=Horsepox synthesis: A case of the unilateralist's curse?|url=https://thebulletin.org/horsepox-synthesis-case-unilateralist%E2%80%99s-curse11523|work=Bulletin of the Atomic Scientists|access-date=26 February 2018|date=2018-02-19|archive-date=25 February 2018|archive-url=https://web.archive.org/web/20180225184709/https://thebulletin.org/horsepox-synthesis-case-unilateralist%E2%80%99s-curse11523|url-status=dead}}

=Books=

== ''Superintelligence: Paths, Dangers, Strategies'' ==

In 2014, Bostrom published ''Superintelligence: Paths, Dangers, Strategies'', which became a ''New York Times'' bestseller.{{cite news |date=2014-09-08 |title=Best Selling Science Books |url=https://www.nytimes.com/2014/09/09/science/best-selling-science-books.html |access-date=19 February 2015 |newspaper=The New York Times}}

The book argues that superintelligence is possible and explores different types of superintelligences, their cognition, and the associated risks. It also presents technical and strategic considerations on how to make superintelligence safe.

=== Characteristics of a superintelligence ===

Bostrom explores multiple possible paths to superintelligence, including whole brain emulation and human intelligence enhancement, but focuses on artificial general intelligence, explaining that electronic devices have many advantages over biological brains.{{Cite book |last=Bostrom |first=Nick |title=Superintelligence |date=2016 |publisher=Oxford University Press |isbn=978-0-19-873983-8 |pages=98–111 |oclc=943145542}}

Bostrom draws a distinction between final goals and instrumental goals. A final goal is what an agent tries to achieve for its own intrinsic value. Instrumental goals are merely intermediate steps towards final goals. Bostrom contends that there are instrumental goals that will be shared by most sufficiently intelligent agents because they are generally useful for achieving any objective (e.g. preserving the agent's own existence or current goals, acquiring resources, improving its cognition, and so on); this is the concept of instrumental convergence. On the other hand, he writes that virtually any level of intelligence can in theory be combined with virtually any final goal (even absurd final goals, e.g. making paperclips), a concept he calls the orthogonality thesis.{{Cite news |last=Bostrom |first=Nick |date=2014-09-11 |title=You Should Be Terrified of Superintelligent Machines |url=https://slate.com/technology/2014/09/will-artificial-intelligence-turn-on-us-robots-are-nothing-like-humans-and-thats-what-makes-them-so-terrifying.html |access-date=2024-09-13 |work=Slate |language=en-US |issn=1091-2339}}

He argues that an AI with the ability to improve itself might initiate an intelligence explosion, resulting (potentially rapidly) in a superintelligence.{{Cite news |title=Clever cogs |newspaper=The Economist |url=https://www.economist.com/books-and-arts/2014/08/09/clever-cogs |access-date=2023-08-14 |issn=0013-0613}} Such a superintelligence could have vastly superior capabilities, notably in strategizing, social manipulation, hacking or economic productivity. With such capabilities, a superintelligence could outwit humans and take over the world, establishing a singleton (which is "a world order in which there is at the global level a single decision-making agency"{{Efn|Bostrom notes that "the concept of a singleton is an abstract one: a singleton could be democracy, a tyranny, a single dominant AI, a strong set of global norms that include effective provisions for their own enforcement, or even an alien overlord—its defining characteristic being simply that it is some form of agency that can solve all major global coordination problems"|name=singleton}}) and optimizing the world according to its final goals.

Bostrom argues that giving simplistic final goals to a superintelligence could be catastrophic:

{{Blockquote|text=Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.}}

=== Mitigating the risk ===

Bostrom explores several pathways to reduce the existential risk from AI. He emphasizes the importance of international collaboration, notably to reduce race-to-the-bottom and AI arms race dynamics. He suggests potential techniques to help control AI, including containment, stunting AI capabilities or knowledge, narrowing the operating context (e.g. to question-answering), or "tripwires" (diagnostic mechanisms that can lead to a shutdown). But Bostrom contends that "we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out". He thus suggests that in order to be safe for humanity, superintelligence must be aligned with morality or human values so that it is "fundamentally on our side".{{Cite web |last=Bostrom |first=Nick |date=March 2015 |title=What happens when our computers get smarter than we are? |url=https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are |website=TED}} Potential AI normativity frameworks include Yudkowsky's coherent extrapolated volition (human values improved via extrapolation), moral rightness (doing what is morally right), and moral permissibility (following humanity's coherent extrapolated volition except when it's morally impermissible).

Bostrom warns that an existential catastrophe can also occur from AI being misused by humans for destructive purposes, or from humans failing to take into account the potential moral status of digital minds. Despite these risks, he says that machine superintelligence seems involved at some point in "all the plausible paths to a really great future".

=== Public reception ===

The book became a ''New York Times'' bestseller and received positive feedback from figures such as Stephen Hawking, Bill Gates, Elon Musk, Peter Singer and Derek Parfit. It was praised for offering clear and compelling arguments on a neglected yet important topic. It was sometimes criticized for spreading pessimism about the potential of AI, or for focusing on long-term and speculative risks.{{Cite magazine |last=Khatchadourian |first=Raffi |date=2015-11-16 |title=The Doomsday Invention |url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |access-date=2023-08-13 |magazine=The New Yorker |language=en-US}} Some skeptics, such as Daniel Dennett and Oren Etzioni, contended that superintelligence is too far away for the risk to be significant.{{Cite web |title=Is Superintelligence Impossible? |url=https://www.edge.org/conversation/john_brockman-is-superintelligence-impossible |access-date=2023-08-13 |website=Edge}}{{cite web |author=Oren Etzioni |year=2016 |title=No, the Experts Don't Think Superintelligent AI is a Threat to Humanity |url=https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ |publisher=MIT Technology Review}} Yann LeCun considers that there is no existential risk, asserting that superintelligent AI will have no desire for self-preservation{{Cite web |last=Arul |first=Akashdeep |date=2022-01-27 |title=Yann LeCun sparks a debate on AGI vs human-level AI |url=https://analyticsindiamag.com/yann-lecun-sparks-a-debate-on-agi-vs-human-level-ai/ |access-date=2023-08-14 |website=Analytics India Magazine |language=en-US}} and that experts can be trusted to make it safe.{{Cite web |last=Taylor |first=Chloe |date=15 June 2023 |title=Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now—but 'A.I. godfather' says an existential threat is 'preposterously ridiculous' |url=https://fortune.com/2023/06/15/yann-lecun-ai-godfather-destroy-humanity-threat/ |access-date=2023-08-14 |website=Fortune |language=en}}

Raffi Khatchadourian wrote that Bostrom's book on superintelligence "is not intended as a treatise of deep originality; Bostrom's contribution is to impose the rigors of analytic philosophy on a messy corpus of ideas that emerged at the margins of academic thought."

== ''Deep Utopia: Life and Meaning in a Solved World'' ==

In his 2024 book, ''Deep Utopia: Life and Meaning in a Solved World'', Bostrom explores what an ideal life might look like if humanity transitions successfully into a post-superintelligence world. Bostrom notes that the question is "not how interesting a future is to look at, but how good it is to live in." He outlines some technologies that he considers physically possible in theory and available at technological maturity, such as cognitive enhancement, reversal of aging, arbitrary sensory inputs (taste, sound...), or the precise control of motivation, mood, well-being and personality. According to him, not only would machines be better than humans at working, but they would also undermine the purpose of many leisure activities, providing extreme welfare while challenging the quest for meaning.{{Cite news |last=Coy |first=Peter |date=2024-04-05 |title=If A.I. Takes All Our Jobs, Will It Also Take Our Purpose? |url=https://www.nytimes.com/2024/04/05/opinion/ai-jobs-nick-bostrom.html |access-date=2024-07-08 |work=The New York Times |language=en-US |issn=0362-4331}}{{Cite book |last=Bostrom |first=Nick |title=Deep utopia: life and meaning in a solved world |date=2024 |publisher=Ideapress Publishing |isbn=978-1-64687-164-3 |location=Washington, DC |chapter=Technological maturity}}

Public engagement

Bostrom has provided policy advice and consulted for many governments and organizations. He gave evidence to the House of Lords, Select Committee on Digital Skills.{{cite web|title=Digital Skills Committee – timeline |url=http://www.parliament.uk/business/committees/committees-a-z/lords-select/digital-skills-committee/timeline/ |website=UK Parliament |access-date=17 March 2017|language=en}} He is an advisory board member for the Machine Intelligence Research Institute,{{cite web|title=Team – Machine Intelligence Research Institute |url=https://intelligence.org/team/#advisors|website=Machine Intelligence Research Institute|access-date=17 March 2017}} Future of Life Institute,{{cite web|title=Team – Future of Life Institute|url=https://futureoflife.org/team/|website=Future of Life Institute|access-date=17 March 2017}} and an external advisor for the Cambridge Centre for the Study of Existential Risk.{{cite magazine|last1=McBain|first1=Sophie|title=Apocalypse Soon: Meet The Scientists Preparing For the End Times |url=https://newrepublic.com/article/119697/scientists-preparing-apocalypse|magazine=New Republic|access-date=17 March 2017 |date=4 October 2014}}

= 1996 email incident =

In January 2023, Bostrom issued an apology{{Cite web |last1=Bostrom |first1=Nick |title=Apology for old email |url=https://nickbostrom.com/oldemail.pdf |access-date=17 January 2024 |website=nickbostrom.com}} for a 1996 listserv email{{Cite web |date=2015-04-27 |title=extropians: Re: Offending People's Minds |url=http://extropians.weidai.com/extropians.96/0441.html |archive-url=https://web.archive.org/web/20150427154738/http://extropians.weidai.com/extropians.96/0441.html |archive-date=27 April 2015 |access-date=2024-06-24}} he sent as a postgrad where he had stated that he thought "Blacks are more stupid than whites", and where he also used the word "niggers" in a description of how he thought this statement might be perceived by others.{{Cite news |last=Ladden-Hall |first=Dan |date=2023-01-12 |title=Top Oxford Philosopher Nick Bostrom Admits Writing 'Disgusting' N-Word Mass Email |url=https://www.thedailybeast.com/nick-bostrom-oxford-philosopher-admits-writing-racist-n-word-email |access-date=2023-01-12 |work=The Daily Beast |language=en |quote=Nick Bostrom posted a note on his website apologizing for the appallingly racist listserv email.}}{{Cite news |last=Robins-Early |first=Nick |date=19 April 2024 |title=Oxford shuts down institute run by Elon Musk-backed philosopher |url=https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes |work=The Guardian |quote=The closure of Bostrom’s center is a further blow to the effective altruism and long-termism movements that the philosopher had spent decades championing, and which in recent years have become mired in scandals related to racism, sexual harassment and financial fraud. Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed “Blacks are more stupid than whites” and used the N-word.}} The apology, posted on his website, stated that "the invocation of a racial slur was repulsive" and that he "completely repudiate[d] this disgusting email".

The email has been described as "racist" in several news sources.{{Cite news |last=Weinberg |first=Justin |date=13 January 2023 |title=Why a Philosopher's Racist Email from 26 Years Ago is News Today |url=https://dailynous.com/2023/01/13/why-philosophers-racist-email-26-years-ago-news-today/ |work=The Daily Nous |quote=Influential Oxford philosopher Nick Bostrom, well-known for his work on philosophical questions related to ethics, the future, and technology (existential risk, artificial intelligence, simulation), posted an apology for a blatantly racist email he sent to a listserv 26 years ago.}}{{Cite news |last1=Gault |first1=Matthew |last2=Pearson |first2=Jordan |date=12 January 2023 |title=Prominent AI Philosopher and 'Father' of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv |url=https://www.vice.com/en/article/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv/ |work=Vice |quote=Nick Bostrom, an influential philosopher at the University of Oxford who has been called the “father” of the longtermism movement, has apologized for a racist email he sent in the mid-90s. In the email, Bostrom said that “Blacks are more stupid than whites,” adding, “I like that sentence and think it is true,” and used a racial slur.}} According to Andrew Anthony of The Guardian, "The apology did little to placate Bostrom’s critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics."{{Cite news |last=Anthony |first=Andrew |date=2024-04-28 |title='Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute |url=https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism |access-date=2024-07-04 |work=The Observer |language=en-GB |issn=0029-7712 |quote=The apology did little to placate Bostrom’s critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics. Although, after an investigation, Oxford University did accept that Bostrom was not a racist, the whole episode left a stain on the institute’s reputation}}

Shortly afterward, Oxford University condemned the language used in the email and started an investigation.{{Cite news |last=Bilyard |first=Dylan |date=15 January 2023 |title=Investigation Launched into Oxford Don's Racist Email |url=https://theoxfordblue.co.uk/investigation-launched-into-oxford-dons-racist-email/ |work=The Oxford Blue}} The investigation concluded on 10 August 2023: "[W]e do not consider you to be a racist or that you hold racist views, and we consider that the apology you posted in January 2023 was sincere."{{Cite web |last=Bostrom |first=Nick |title=Apology for an Old Email |url=https://nickbostrom.com/oldemail.pdf |website=nickbostrom.com |quote=we do not consider you to be a racist or that you hold racist views, and we consider that the apology you posted in January 2023 was sincere. … we believe that your apology, your acknowledgement of the distress your actions caused, and your appreciation for the care and time that everyone has given to this process has been genuine and sincere. We were also encouraged that you have already embarked on a journey of deep and meaningful reflection, which includes exploring the learning and self-education from this process.}}

Personal life

Bostrom met his wife Susan in 2002. As of 2015, she lived in Montreal and Bostrom in Oxford. They have one son.

Selected works

=Books=

  • ''Anthropic Bias: Observation Selection Effects in Science and Philosophy'' (Routledge, 2002)
  • ''Global Catastrophic Risks'' (edited with Milan M. Ćirković; Oxford University Press, 2008)
  • ''Superintelligence: Paths, Dangers, Strategies'' (Oxford University Press, 2014)
  • ''Deep Utopia: Life and Meaning in a Solved World'' (Ideapress Publishing, 2024)

=Journal articles=

  • {{cite journal |first=Nick |last=Bostrom |title=How Long Before Superintelligence? |journal=Journal of Future Studies|volume=2 |year=1998 |url=http://www.nickbostrom.com/superintelligence.html }}
  • {{cite journal |first=Nick |last=Bostrom |title=Observer-relative chances in anthropic reasoning? |journal=Erkenntnis |volume=52 |date=January 2000 |pages=93–108 |url=http://www.anthropic-principle.com/preprints/rel/relative.html |doi=10.1023/A:1005551304409 |jstor=20012969 |issue=1|s2cid=140474848 |author-mask=1|url-access=subscription }}
  • {{cite journal |first=Nick |last=Bostrom |title=The Meta-Newcomb Problem |journal=Analysis |volume=61 |issue=4 |date=October 2001 |pages=309–310 |url=http://www.nickbostrom.com/papers/newcomb.html |doi=10.1111/1467-8284.00310 |jstor=3329010|author-mask=1|url-access=subscription }}
  • {{cite journal |first=Nick |last=Bostrom |title=Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards |journal=Journal of Evolution and Technology |volume=9 |issue=1 |date=March 2002 |url=http://www.nickbostrom.com/existential/risks.html|author-mask=1}}
  • {{cite journal |first=Nick |last=Bostrom |title=Are You Living in a Computer Simulation? |journal=Philosophical Quarterly |volume=53 |issue=211 |date=April 2003 |pages=243–255 |url=http://www.simulation-argument.com/simulation.pdf |doi=10.1111/1467-9213.00309 |jstor=3542867|author-mask=1}}
  • {{cite journal |first=Nick |last=Bostrom |title=The Mysteries of Self-Locating Belief and Anthropic Reasoning |journal=Harvard Review of Philosophy |volume=11 |issue=Spring |year=2003 |pages=59–74 |url=http://anthropic-principle.com/preprints/mys/mysteries.pdf|author-mask=1|doi=10.5840/harvardreview20031114 }}
  • {{cite journal |first=Nick |last=Bostrom |title=Astronomical Waste: The Opportunity Cost of Delayed Technological Development |journal=Utilitas |volume=15 |issue=3 |date=November 2003 |pages=308–314 |url=http://www.nickbostrom.com/astronomical/waste.html |doi=10.1017/S0953820800004076|author-mask=1|citeseerx=10.1.1.429.2849 |s2cid=15860897 }}
  • {{cite journal |first=Nick |last=Bostrom |title=In Defense of Posthuman Dignity |journal=Bioethics |volume=19 |issue=3 |date=June 2005 |pages=202–214 |url=http://www.nickbostrom.com/ethics/dignity.html |doi=10.1111/j.1467-8519.2005.00437.x |pmid=16167401|author-mask=1|url-access=subscription }}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Max |last2=Tegmark |title=How Unlikely is a Doomsday Catastrophe? |journal=Nature |volume=438 |date=December 2005 |pages=754 |arxiv=astro-ph/0512204 |doi=10.1038/438754a |pmid=16341005 |issue=7069|author-mask=with|bibcode=2005Natur.438..754T |s2cid=4390013 }}
  • {{cite journal |first=Nick |last=Bostrom |title=What is a Singleton? |journal=Linguistic and Philosophical Investigations |volume=5 |issue=2 |year=2006 |pages=48–54 |url=http://www.nickbostrom.com/fut/singleton.html|author-mask=1}}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Toby |last2=Ord |title=The Reversal Test: Eliminating Status Quo Bias in Applied Ethics |journal=Ethics |volume=116 |issue=4 |date=July 2006 |pages=656–680 |url=http://www.nickbostrom.com/ethics/statusquo.pdf |doi=10.1086/505233|pmid=17039628 |s2cid=12861892 |author-mask=with}}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Anders |last2=Sandberg |title=Converging Cognitive Enhancements |journal=Annals of the New York Academy of Sciences |volume=1093 |date=December 2006 |pages=201–207 |url=http://www.nickbostrom.com/papers/converging.pdf |doi=10.1196/annals.1382.015 |issue=1|pmid=17312260 |author-mask=with|bibcode=2006NYASA1093..201S |citeseerx=10.1.1.328.3853 |s2cid=10135931 }}
  • {{cite journal |first=Nick |last=Bostrom |title=Drugs can be used to treat more than disease |journal=Nature |volume=452 |issue=7178 |date=January 2008 |pages=520 |url=http://www.nickbostrom.com/letters/drugs.pdf |doi=10.1038/451520b|pmid=18235476 |author-mask=1|bibcode=2008Natur.451..520B |s2cid=4426990 |doi-access=free }}
  • {{cite journal |first=Nick |last=Bostrom |title=The doomsday argument |journal=Think |volume=6 |issue=17–18 |year=2008 |pages=23–28 |doi=10.1017/S1477175600002943|s2cid=171035249 |author-mask=1}}
  • {{cite journal |first=Nick |last=Bostrom |title=Where Are They? Why I hope the search for extraterrestrial life finds nothing |journal=Technology Review |issue=May/June |year=2008 |pages=72–77 |url=http://www.nickbostrom.com/extraterrestrial.pdf|author-mask=1}}
  • {{cite journal |first=Nick |last=Bostrom |title=Letter from Utopia |journal=Studies in Ethics, Law, and Technology|issue=1 |volume=2 |year=2008 |doi=10.2202/1941-6008.1025}}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Anders |last2=Sandberg |title=Cognitive Enhancement: Methods, Ethics, Regulatory Challenges |journal=Science and Engineering Ethics |volume=15 |date=September 2009 |pages=311–341 |url=http://www.nickbostrom.com/cognitive.pdf |doi=10.1007/s11948-009-9142-5 |pmid=19543814 |issue=3|author-mask=with|citeseerx=10.1.1.143.4686 |s2cid=6846531 }}
  • {{cite journal |first=Nick |last=Bostrom |title=Pascal's Mugging |journal=Analysis |volume=69 |issue=3 |year=2009 |pages=443–445 |url=http://www.nickbostrom.com/papers/pascal.pdf |doi=10.1093/analys/anp062 |jstor=40607655|author-mask=1}}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Milan |last2=Ćirković |first3=Anders |last3=Sandberg |title=Anthropic Shadow: Observation Selection Effects and Human Extinction Risks |journal=Risk Analysis |volume=30 |issue=10 |year=2010 |pages=1495–1506 |url=http://www.nickbostrom.com/papers/anthropicshadow.pdf |doi=10.1111/j.1539-6924.2010.01460.x|pmid=20626690 |bibcode=2010RiskA..30.1495C |s2cid=6485564 |author-mask=with}}
  • {{cite journal |first=Nick |last=Bostrom |title=Information Hazards: A Typology of Potential Harms from Knowledge |journal=Review of Contemporary Philosophy |volume=10 |year=2011 |pages=44–79 |url=http://www.nickbostrom.com/information-hazards.pdf|author-mask=1 |id={{ProQuest|920893069}}}}
  • {{cite journal |first=Nick |last=Bostrom |title=The Ethics of Artificial Intelligence |journal=Cambridge Handbook of Artificial Intelligence |year=2011 |url=http://www.nickbostrom.com/ethics/artificial-intelligence.pdf |access-date=13 February 2017 |archive-url=https://web.archive.org/web/20160304015020/http://www.nickbostrom.com/ethics/artificial-intelligence.pdf |archive-date=4 March 2016 |url-status=dead |author-mask=1}}
  • {{cite journal |first=Nick |last=Bostrom |title=Infinite Ethics |journal=Analysis and Metaphysics |volume=10 |year=2011 |pages=9–59 |url=http://www.nickbostrom.com/ethics/infinite.pdf}}
  • {{cite journal |first=Nick |last=Bostrom |title=The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents |journal=Minds and Machines |volume=22 |date=May 2012 |pages=71–84 |url=http://www.nickbostrom.com/superintelligentwill.pdf |doi=10.1007/s11023-012-9281-3 |issue=2|s2cid=7445963 |author-mask=1}}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Stuart |last2=Armstrong |first3=Anders |last3=Sandberg |title=Thinking Inside the Box: Controlling and Using Oracle AI |journal=Minds and Machines |volume=22 |issue=4 |date=November 2012 |pages=299–324 |url=http://www.nickbostrom.com/papers/oracle.pdf |doi=10.1007/s11023-012-9282-2|author-mask=with|citeseerx=10.1.1.396.799 |s2cid=9464769 }}
  • {{cite journal |first=Nick |last=Bostrom |title=Existential Risk Reduction as Global Priority |journal=Global Policy |volume=4 |issue=3 |date=February 2013 |pages=15–31 |url=http://www.existential-risk.org/concept.html |doi=10.1111/1758-5899.12002|author-mask=1|url-access=subscription }}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Carl |last2=Shulman |title=Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? |journal=Global Policy |volume=5 |issue=1 |date=February 2014 |pages=85–92 |url=http://www.nickbostrom.com/papers/embryo.pdf |doi=10.1111/1758-5899.12123|author-mask=with|citeseerx=10.1.1.428.8837 }}
  • {{cite journal |first1=Nick |last1=Bostrom |first2=Luke |last2=Muehlhauser |title=Why we need friendly AI |journal=Think |volume=13|issue=36 |year=2014 |pages=41–47 |url=http://www.nickbostrom.com/views/whyfriendlyai.pdf |doi=10.1017/S1477175613000316|s2cid=143657841 |author-mask=with}}
  • {{Cite journal|last=Bostrom|first=Nick|date=September 2019|title=The Vulnerable World Hypothesis|journal=Global Policy |volume=10|issue=4|pages=455–476|doi=10.1111/1758-5899.12718|doi-access=free}}

Notes

{{Notelist}}

References

{{Reflist}}