Machine Intelligence Research Institute
{{short description|Nonprofit organization researching AI safety}}
{{redirect-distinguish|Singularity Institute|Singularity University}}
{{Infobox organization
| image = MIRI logo.svg
| formation = {{start date and age|2000}}
| type = Nonprofit research institute
| tax_id = 58-2565917
| purpose = Research into friendly artificial intelligence and the AI control problem
| location = Berkeley, California, U.S.
| key_people = Eliezer Yudkowsky
| website = {{Official URL}}
| footnotes =
}}
The '''Machine Intelligence Research Institute''' ('''MIRI'''), formerly the '''Singularity Institute for Artificial Intelligence''' ('''SIAI'''), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
== History ==
[[File:Eliezer Yudkowsky, Stanford 2006 (square crop).jpg|thumb|Eliezer Yudkowsky at Stanford University in 2006]]
In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).{{cite news |title=MIRI: Artificial Intelligence: The Danger of Good Intentions - Future of Life Institute |url=https://futureoflife.org/2015/10/11/113/ |work=Future of Life Institute |date=11 October 2015 |access-date=28 August 2018 |archive-date=28 August 2018 |archive-url=https://web.archive.org/web/20180828102343/https://futureoflife.org/2015/10/11/113/ |url-status=live }} However, Yudkowsky grew concerned that future AI systems could become superintelligent and pose risks to humanity, and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which at the time were largely ignored by scientists in the field.{{cite magazine |last1=Khatchadourian |first1=Raffi |title=The Doomsday Invention |url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |magazine=The New Yorker |access-date=2018-08-28 |archive-date=2019-04-29 |archive-url=https://web.archive.org/web/20190429183807/https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |url-status=live }}
Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism".{{cite news |last=Abate |first=Tom |date=2006 |title=Smarter than thou? |url=http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2006/05/12/BUG9IIMG1V197.DTL |newspaper=San Francisco Chronicle |access-date=12 October 2015 |archive-date=11 February 2011 |archive-url=https://web.archive.org/web/20110211154255/http://www.sfgate.com/cgi-bin/article.cgi?f=%2Fc%2Fa%2F2006%2F05%2F12%2FBUG9IIMG1V197.DTL |url-status=live }}{{cite news |last=Abate |first=Tom |date=2007 |title=Public meeting will re-examine future of artificial intelligence |url=http://www.sfgate.com/news/article/Public-meeting-will-re-examine-future-of-2504766.php |newspaper=San Francisco Chronicle |access-date=12 October 2015 |archive-date=14 January 2016 |archive-url=https://web.archive.org/web/20160114083206/http://www.sfgate.com/news/article/Public-meeting-will-re-examine-future-of-2504766.php |url-status=live }} In 2011, its offices were four apartments in downtown Berkeley.{{cite news |last1=Kaste |first1=Martin |title=The Singularity: Humanity's Last Invention? |url=https://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention |work=All Things Considered, NPR |date=January 11, 2011 |language=en |access-date=August 28, 2018 |archive-date=August 28, 2018 |archive-url=https://web.archive.org/web/20180828134334/https://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention |url-status=live }} In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,{{cite news |title=Press release: Singularity University Acquires the Singularity Summit |url=https://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/ |work=Singularity University |date=9 December 2012 |access-date=28 August 2018 |archive-date=27 April 2019 |archive-url=https://web.archive.org/web/20190427112149/https://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/ |url-status=live }} and in the following month took the name "Machine Intelligence Research Institute".{{cite news |title=Press release: We are now the "Machine Intelligence Research Institute" (MIRI) - Machine Intelligence Research Institute |url=https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/ |work=Machine Intelligence Research Institute |date=30 January 2013 |access-date=28 August 2018 |archive-date=23 September 2018 |archive-url=https://web.archive.org/web/20180923081559/https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/ |url-status=live }}
In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations.{{cite book|title=Life 3.0: Being Human in the Age of Artificial Intelligence|last1=Tegmark|first1=Max|date=2017|publisher=Knopf|isbn=978-1-101-94659-6|location=United States}}{{rp|327}}
In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI.{{Cite web|url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019|title=Machine Intelligence Research Institute — General Support (2019)|date=2019-03-29|website=Open Philanthropy Project|language=en|access-date=2019-10-08|archive-date=2019-10-08|archive-url=https://web.archive.org/web/20191008133508/https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019|url-status=live}} In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years.{{cite web|title=Machine Intelligence Research Institute — General Support (2020)|date=10 March 2020|url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020|publisher=Open Philanthropy Project|url-status=live|archive-url=https://web.archive.org/web/20200413172954/https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020|archive-date=April 13, 2020}}{{cite web |url=https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ |title=MIRI's largest grant to date! |publisher=MIRI |date=April 27, 2020 |last=Bensinger |first=Rob |access-date=April 27, 2020 |archive-date=April 27, 2020 |archive-url=https://web.archive.org/web/20200427231623/https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ |url-status=live }}
In 2021, Vitalik Buterin donated several million dollars' worth of Ethereum to MIRI.{{Cite web |last=Maheshwari |first=Suyash |date=2021-05-13 |title=Ethereum creator Vitalik Buterin donates $1.5 billion in cryptocurrency to India COVID Relief Fund & other charities |url=https://www.msn.com/en-in/money/topstories/ethereum-creator-vitalik-buterin-donates-15-billion-in-cryptocurrency-to-india-covid-relief-fund-other-charities/ar-BB1gG1nr |url-status=dead |archive-url=https://web.archive.org/web/20210824123721/https://www.msn.com/en-in/money/topstories/ethereum-creator-vitalik-buterin-donates-15-billion-in-cryptocurrency-to-india-covid-relief-fund-other-charities/ar-BB1gG1nr |archive-date=2021-08-24 |access-date=2023-01-23 |website=MSN}}
== Research and approach ==
[[File:Nate Soares giving a talk at Google.gk.jpg|thumb|Nate Soares giving a talk at Google in 2016]]
MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, focuses on how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that those systems remain friendly as they evolve.{{cite news |last1=Waters |first1=Richard |title=Artificial intelligence: machine v man |url=https://www.ft.com/content/abc942cc-5fb3-11e4-8c27-00144feabdc0 |access-date=27 August 2018 |work=Financial Times |date=31 October 2014 |archive-date=27 August 2018 |archive-url=https://web.archive.org/web/20180827110408/https://www.ft.com/content/abc942cc-5fb3-11e4-8c27-00144feabdc0 |url-status=live }}{{cite news|url=https://www.theatlantic.com/technology/archive/2015/01/building-robots-with-better-morals-than-humans/385015/|title=Building Robots With Better Morals Than Humans|last=LaFrance|first=Adrienne|date=2015|newspaper=The Atlantic|access-date=12 October 2015|archive-date=19 August 2015|archive-url=https://web.archive.org/web/20150819210306/http://www.theatlantic.com/technology/archive/2015/01/building-robots-with-better-morals-than-humans/385015/|url-status=live}}{{cite book |last1=Russell |first1=Stuart |author1-link=Stuart J. Russell |last2=Norvig |first2=Peter |author2-link=Peter Norvig |date=2009 |title=Artificial Intelligence: A Modern Approach |publisher=Prentice Hall |isbn=978-0-13-604259-4}}
MIRI researchers advocate early safety work as a precautionary measure.{{cite news |last1=Sathian |first1=Sanjena |title=The Most Important Philosophers of Our Time Reside in Silicon Valley |url=https://www.ozy.com/fast-forward/the-21st-century-philosophers/65230 |website=OZY |date=4 January 2016 |access-date=28 July 2018 |language=en |archive-date=29 July 2018 |archive-url=https://web.archive.org/web/20180729013157/https://www.ozy.com/fast-forward/the-21st-century-philosophers/65230 |url-status=live }} However, they have expressed skepticism about the views of singularity advocates such as Ray Kurzweil that superintelligence is "just around the corner". MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.{{cite news |last=Hsu |first=Jeremy |date=2015 |title=Making Sure AI's Rapid Rise Is No Surprise |url=http://blogs.discovermagazine.com/lovesick-cyborg/2015/09/02/making-sure-ais-rapid-rise-is-no-surprise/ |newspaper=Discover |access-date=12 October 2015 |archive-date=12 October 2015 |archive-url=https://web.archive.org/web/20151012072128/http://blogs.discovermagazine.com/lovesick-cyborg/2015/09/02/making-sure-ais-rapid-rise-is-no-surprise/ |url-status=live }}
MIRI aligns itself with the principles and objectives of the effective altruism movement.{{Cite web|url=https://intelligence.org/2015/08/28/ai-and-effective-altruism/|title=AI and Effective Altruism|date=2015-08-28|website=Machine Intelligence Research Institute|language=en-US|access-date=2019-10-08|archive-date=2019-10-08|archive-url=https://web.archive.org/web/20191008133821/https://intelligence.org/2015/08/28/ai-and-effective-altruism/|url-status=live}}
== Works by MIRI staff ==
* {{cite web |last1=Graves |first1=Matthew |title=Why We Should Be Concerned About Artificial Superintelligence |url=https://www.skeptic.com/reading_room/why-we-should-be-concerned-about-artificial-superintelligence/ |website=Skeptic |publisher=The Skeptics Society |access-date=28 July 2018 |date=8 November 2017}}
* {{cite conference |url=http://www.aaai.org/ocs/index.php/WS/AAAIW14/paper/viewFile/8833/8294 |title=Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem |last1=LaVictoire |first1=Patrick |last2=Fallenstein |first2=Benja |last3=Yudkowsky |first3=Eliezer |author-link3=Eliezer Yudkowsky |last4=Bárász |first4=Mihály |last5=Christiano |first5=Paul |last6=Herreshoff |first6=Marcello |date=2014 |publisher=AAAI Publications |book-title=Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop }}
* {{cite journal |last1=Soares |first1=Nate |last2=Levinstein |first2=Benjamin A. |title=Cheating Death in Damascus |journal=Formal Epistemology Workshop |date=2017 |url=https://intelligence.org/files/DeathInDamascus.pdf |access-date=28 July 2018}}
* {{cite conference|url=http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136 |title=Corrigibility |last1=Soares |first1=Nate |last2=Fallenstein |first2=Benja |last3=Yudkowsky |first3=Eliezer |author-link3=Eliezer Yudkowsky |last4=Armstrong |first4=Stuart |date=2015|publisher=AAAI Publications |book-title=AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015 }}
* {{cite book |last1=Soares |first1=Nate |last2=Fallenstein |first2=Benja |date=2015 |chapter=Aligning Superintelligence with Human Interests: A Technical Research Agenda |chapter-url=https://intelligence.org/files/TechnicalAgenda.pdf |editor1-last=Miller |editor1-first=James |editor2-last=Yampolskiy |editor2-first=Roman |editor3-last=Armstrong |editor3-first=Stuart |display-editors = 3 |editor4-last=Callaghan |editor4-first=Vic |title=The Technological Singularity: Managing the Journey |publisher=Springer }}
* {{cite book |last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |date=2008 |chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk |chapter-url=https://intelligence.org/files/AIPosNegFactor.pdf |editor1-last=Bostrom |editor1-first=Nick |editor1-link=Nick Bostrom |editor2-last=Ćirković |editor2-first=Milan |title=Global Catastrophic Risks |publisher=Oxford University Press |isbn=978-0199606504}}
* {{cite journal |last1=Taylor |first1=Jessica |title=Quantilizers: A Safer Alternative to Maximizers for Limited Optimization |journal=Workshops at the Thirtieth AAAI Conference on Artificial Intelligence |date=2016 |url=https://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12613}}
* {{cite conference |url=https://intelligence.org/files/ComplexValues.pdf |title=Complex Value Systems in Friendly AI |last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |date=2011 |publisher=Springer |book-title=Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011 |location=Berlin }}
== References ==
{{reflist|30em}}
== Further reading ==
* {{cite journal |last1=Russell |first1=Stuart |last2=Dewey |first2=Daniel |last3=Tegmark |first3=Max |title=Research Priorities for Robust and Beneficial Artificial Intelligence |journal=AI Magazine |date=Winter 2015 |volume=36 |issue=4 |page=6 |doi=10.1609/aimag.v36i4.2577 |arxiv=1602.03506 |bibcode=2016arXiv160203506R }}
== External links ==
* {{official website}}
* {{ProPublicaNonprofitExplorer|582565917}}
{{Existential risk from artificial intelligence|state=expanded}}
{{Effective altruism}}
{{LessWrong}}
[[Category:Artificial intelligence associations]]
[[Category:501(c)(3) organizations]]
[[Category:Transhumanist organizations]]
[[Category:Existential risk organizations]]
[[Category:Existential risk from artificial general intelligence]]
[[Category:2000 establishments in California]]
[[Category:Organizations established in 2000]]