Friendly artificial intelligence
{{Short description|AI to benefit humanity}}
{{Use mdy dates|date=October 2023}}
{{Artificial intelligence|Philosophy}}
Friendly artificial intelligence (friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
Etymology and usage
[[File:Eliezer Yudkowsky, Stanford 2006 (square crop).jpg|thumb|Eliezer Yudkowsky, AI researcher and creator of the term]]
The term was coined by Eliezer Yudkowsky,{{cite book|last1=Tegmark|first1=Max|title=Our Mathematical Universe: My Quest for the Ultimate Nature of Reality|date=2014|isbn=9780307744258|edition=First|chapter=Life, Our Universe and Everything|quote=Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"...|title-link=Our Mathematical Universe: My Quest for the Ultimate Nature of Reality|publisher=Knopf Doubleday Publishing }} who is best known for popularizing the idea,{{cite book |last1=Russell |first1=Stuart |author1-link=Stuart J. Russell |last2=Norvig |first2=Peter |author2-link=Peter Norvig |date=2009 |title=Artificial Intelligence: A Modern Approach |publisher=Prentice Hall |isbn=978-0-13-604259-4|title-link=Artificial Intelligence: A Modern Approach }}{{cite book |last=Leighton |first=Jonathan |date=2011 |title=The Battle for Compassion: Ethics in an Apathetic Universe |publisher=Algora |isbn=978-0-87586-870-7}} to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:
{{quote|Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.}}
"Friendly" is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.{{cite book |last1=Wallach |first1=Wendell |last2=Allen | first2=Colin |date=2009 |title=Moral Machines: Teaching Robots Right from Wrong |publisher=Oxford University Press, Inc. |isbn=978-0-19-537404-9 }}
Risks of unfriendly AI
{{Main|Existential risk from artificial general intelligence}}
The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict.{{cite journal|url=https://www.academia.edu/704751|author=Kevin LaGrandeur|title=The Persistent Peril of the Artificial Slave|journal=Science Fiction Studies|year=2011|volume=38|issue=2|page=232|doi=10.5621/sciefictstud.38.2.0232|access-date=2013-05-06|author-link=Kevin LaGrandeur|archive-date=2023-01-13|archive-url=https://web.archive.org/web/20230113152138/https://www.academia.edu/704751|url-status=live}} By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics"—principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators, or allowing them to come to harm.{{cite book| title=The Rest of the Robots| chapter-url=https://archive.org/details/restofrobots00asim| chapter-url-access=registration| publisher=Doubleday| year=1964| isbn=0-385-09041-2| chapter=Introduction| author=Isaac Asimov}}
In modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:
{{quote|Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'}}
In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."{{cite book |author=Eliezer Yudkowsky |year=2008 |chapter-url=http://intelligence.org/files/AIPosNegFactor.pdf |chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk |title=Global Catastrophic Risks |pages=308–345 |editor1=Nick Bostrom |editor2=Milan M. Ćirković |access-date=2013-10-19 |archive-date=2013-10-19 |archive-url=https://web.archive.org/web/20131019182403/http://intelligence.org/files/AIPosNegFactor.pdf |url-status=live }}
Steve Omohundro says that, because of the intrinsic nature of goal-driven systems, a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.{{cite journal |last=Omohundro |first=S. M. |date=February 2008 |title=The basic AI drives |journal=Artificial General Intelligence |volume=171 |pages=483–492 |citeseerx=10.1.1.393.8356}}{{cite book|last1=Bostrom|first1=Nick|title=Superintelligence: Paths, Dangers, Strategies|date=2014|publisher=Oxford University Press|location=Oxford|isbn=9780199678112|title-link=Superintelligence: Paths, Dangers, Strategies |chapter=Chapter 7: The Superintelligent Will}}
Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.{{cite web | last=Dvorsky | first=George | title=How Skynet Might Emerge From Simple Physics | website=Gizmodo | date=2013-04-26 | url=https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911 | access-date=2021-12-23 | archive-date=2021-10-08 | archive-url=https://web.archive.org/web/20211008105300/https://gizmodo.com/how-skynet-might-emerge-from-simple-physics-482402911 | url-status=live }}{{cite journal | last1 = Wissner-Gross | first1 = A. D. | author-link1 = Alexander Wissner-Gross | last2 = Freer | first2 = C. E. | author-link2 = Cameron Freer | year = 2013 | title = Causal entropic forces | journal = Physical Review Letters | volume = 110 | issue = 16 | page = 168702 | doi = 10.1103/PhysRevLett.110.168702 | pmid = 23679649 | bibcode = 2013PhRvL.110p8702W | doi-access = free | hdl = 1721.1/79750 | hdl-access = free }}
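The idea can be illustrated with a minimal sketch (an illustrative toy, not the authors' model or code; the grid world, the sampling-based entropy estimate, and names such as reachable_state_entropy and horizon are assumptions of the sketch): an agent estimates, by random sampling, how widely spread the states it could reach within its planning horizon are, and picks the action that keeps that spread, measured as Shannon entropy, largest.
<syntaxhighlight lang="python">
# Illustrative sketch only: an agent on a small grid picks the move that
# maximizes the Shannon entropy of where random walks of length `horizon`
# could take it, a crude stand-in for "maximizing future freedom of action"
# over a planning horizon.
import math
import random
from collections import Counter

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def in_bounds(pos, size=5):
    return 0 <= pos[0] < size and 0 <= pos[1] < size

def reachable_state_entropy(start, horizon, samples=2000, size=5):
    """Estimate the entropy of the end-state distribution of random walks."""
    endpoints = Counter()
    for _ in range(samples):
        pos = start
        for _ in range(horizon):
            step = random.choice(MOVES)
            nxt = (pos[0] + step[0], pos[1] + step[1])
            if in_bounds(nxt, size):
                pos = nxt
        endpoints[pos] += 1
    total = sum(endpoints.values())
    return -sum((c / total) * math.log(c / total) for c in endpoints.values())

def choose_action(pos, horizon, size=5):
    """Pick the move whose successor state keeps the most future options open."""
    best_move, best_entropy = None, float("-inf")
    for move in MOVES:
        nxt = (pos[0] + move[0], pos[1] + move[1])
        if not in_bounds(nxt, size):
            continue
        h = reachable_state_entropy(nxt, horizon - 1, size=size)
        if h > best_entropy:
            best_move, best_entropy = move, h
    return best_move

# A corner-dwelling agent tends to step away from the corner, where more
# future states remain reachable.
print(choose_action((0, 0), horizon=6))
</syntaxhighlight>
The toy captures only the option-preserving tendency; Wissner-Gross's claim is that whether that tendency works for or against human interests depends on how long the agent's planning horizon is.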
Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.{{cite web|last1=Muehlhauser|first1=Luke|title=AI Risk and the Security Mindset|url=http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/|website=Machine Intelligence Research Institute|access-date=15 July 2014|date=31 Jul 2013|archive-date=19 July 2014|archive-url=https://web.archive.org/web/20140719205835/http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/|url-status=live}}
In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI';{{Cite journal|last1=Muehlhauser|first1=Luke|last2=Bostrom|first2=Nick|title=Why We Need Friendly AI|date=2013-12-17|journal=Think|volume=13|issue=36|pages=41–47|doi=10.1017/s1477175613000316|s2cid=143657841|issn=1477-1756}} nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable.{{Cite journal|last1=Boyles|first1=Robert James M.|last2=Joaquin|first2=Jeremiah Joven|date=2019-07-23|title=Why friendly AIs won't be that friendly: a friendly reply to Muehlhauser and Bostrom|journal=AI & Society|volume=35|issue=2|pages=505–507|doi=10.1007/s00146-019-00903-0|s2cid=198190745|issn=0951-5666}}{{Cite journal|last=Chan|first=Berman|date=2020-03-04|title=The rise of artificial intelligence and the crisis of moral passivity|journal=AI & Society|volume=35|issue=4|pages=991–993|language=en|doi=10.1007/s00146-020-00953-9|s2cid=212407078|issn=1435-5655|url=https://philpapers.org/rec/CHATRO-56|access-date=2023-01-21|archive-date=2023-02-10|archive-url=https://web.archive.org/web/20230210114013/https://philpapers.org/rec/CHATRO-56|url-status=live}}
Coherent extrapolated volition
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted".
Rather than being designed directly by human programmers, a Friendly AI is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity would want, given sufficient time and insight to arrive at a satisfactory answer.{{cite web |url=https://intelligence.org/files/CEV.pdf |title=Coherent Extrapolated Volition |publisher=Singularity Institute for Artificial Intelligence |year=2004 |access-date=2015-09-12 |author=Eliezer Yudkowsky |archive-date=2015-09-30 |archive-url=https://web.archive.org/web/20150930035316/http://intelligence.org/files/CEV.pdf |url-status=live }} The appeal to an objective defined through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) as the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
Other approaches
{{See also|AI control problem#Alignment|AI safety}}
Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.{{cite news|last1=Hendry|first1=Erica R.|title=What Happens When Artificial Intelligence Turns On Us?|url=http://www.smithsonianmag.com/innovation/what-happens-when-artificial-intelligence-turns-us-180949415/|access-date=15 July 2014|work=Smithsonian Magazine|date=21 Jan 2014|archive-date=19 July 2014|archive-url=https://web.archive.org/web/20140719142131/http://www.smithsonianmag.com/innovation/what-happens-when-artificial-intelligence-turns-us-180949415/|url-status=live}}
Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".{{Cite journal|last=Baum|first=Seth D.|date=2016-09-28|title=On the promotion of safe and socially beneficial artificial intelligence|journal=AI & Society|volume=32|issue=4|pages=543–551|doi=10.1007/s00146-016-0677-0|s2cid=29012168|issn=0951-5666}}
In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |url=https://archive.org/details/humancompatiblea0000russ |location=United States |publisher=Viking |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322 |url-access=registration }}{{rp|173}}
{{quote|
- The machine's only objective is to maximize the realization of human preferences.
- The machine is initially uncertain about what those preferences are.
- The ultimate source of information about human preferences is human behavior.}}
The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future."{{rp|173}} Similarly, "behavior" includes any choice between options,{{rp|177}} and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.{{rp|201}}
Public policy
James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.
John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily evident, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.{{cite journal|last1=McGinnis|first1=John O.|title=Accelerating AI|journal=Northwestern University Law Review|date=Summer 2010|volume=104|issue=3|pages=1253–1270|url=http://www.law.northwestern.edu/LAWREVIEW/Colloquy/2010/12/|access-date=16 July 2014|archive-date=1 December 2014|archive-url=https://web.archive.org/web/20141201201600/http://www.law.northwestern.edu/LAWREVIEW/Colloquy/2010/12/|url-status=live}}
Criticism
{{See also|Technological singularity#Criticisms}}
Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|access-date=17 September 2014|work=The Guardian|date=9 August 2014|archive-date=17 September 2014|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|url-status=live}} Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears bleak. This is because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had. In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, given: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine, the difficulty of cashing out the set of moral values (that is, values more ideal than the ones human beings possess at present), and the apparent disconnect between the counterfactual antecedents and the ideal value consequent.
Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.{{cite journal | last=Kornai | first=András | title=Bounding the impact of AGI | journal=Journal of Experimental & Theoretical Artificial Intelligence | publisher=Informa UK Limited | volume=26 | issue=3 | date=2014-05-15 | issn=0952-813X | doi=10.1080/0952813x.2014.895109 | pages=417–438 | s2cid=7067517 |quote=...the essence of AGIs is their reasoning facilities, and it is the very logic of their being that will compel them to behave in a moral fashion... The real nightmare scenario (is one where) humans find it advantageous to strongly couple themselves to AGIs, with no guarantees against self-deception.}} Other critics question whether artificial intelligence can be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes but certainty and consensus on how one values the different outcomes."{{cite magazine |url=http://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence |first1=Adam |last1=Keiper |first2=Ari N. |last2=Schulman |title=The Problem with 'Friendly' Artificial Intelligence |journal=The New Atlantis |number=32 |date=Summer 2011 |page= |pages=80–89 |access-date=2012-01-16 |archive-date=2012-01-15 |archive-url=https://web.archive.org/web/20120115062805/http://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence |url-status=live }}
The inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability.{{Cite book |last=Norvig |first=Peter |title=Artificial Intelligence: A Modern Approach |last2=Russell |first2=Stuart |publisher=Pearson |year=2010 |isbn=978-0136042594 |edition=3rd}}
See also
{{div col|colwidth=30em}}
- Affective computing
- AI alignment
- AI effect
- AI takeover
- Ambient intelligence
- Applications of artificial intelligence
- Artificial intelligence arms race
- Artificial intelligence systems integration
- Autonomous agent
- Embodied agent
- Emotion recognition
- Existential risk from artificial general intelligence
- Hallucination (artificial intelligence)
- Hybrid intelligent system
- Intelligence explosion
- Intelligent agent
- Intelligent control
- Machine ethics
- Machine Intelligence Research Institute
- OpenAI
- Regulation of algorithms
- Roko's basilisk
- Sentiment analysis
- Singularitarianism – a moral philosophy advocated by proponents of Friendly AI
- Suffering risks
- Technological singularity
- Three Laws of Robotics
{{div col end}}
References
{{Reflist|30em}}
Further reading
- Yudkowsky, E. (2008). [http://intelligence.org/files/AIPosNegFactor.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk]. In Global Catastrophic Risks, Oxford University Press.
Discusses Artificial Intelligence from the perspective of Existential risk. In particular, Sections 1–4 give background to the definition of Friendly AI in Section 5. Section 6 gives two classes of mistakes (technical and philosophical) which would both lead to the accidental creation of non-Friendly AIs. Sections 7–13 discuss further related issues.
- Omohundro, S. (2008). The Basic AI Drives. In AGI-08 – Proceedings of the First Conference on Artificial General Intelligence.
- Mason, C. (2008). [https://aaai.org/Papers/Workshops/2008/WS-08-07/WS08-07-023.pdf Human-Level AI Requires Compassionate Intelligence] {{Webarchive|url=https://web.archive.org/web/20220109170511/https://aaai.org/Papers/Workshops/2008/WS-08-07/WS08-07-023.pdf |date=2022-01-09 }} Appears in AAAI 2008 Workshop on Meta-Reasoning: Thinking About Thinking.
- Froding, B. and Peterson, M. (2021). [https://link.springer.com/article/10.1007/s10676-020-09556-w Friendly AI] Ethics and Information Technology, Vol. 23, pp. 207–214.
External links
- [https://nickbostrom.com/ethics/ai Ethical Issues in Advanced Artificial Intelligence] by Nick Bostrom
- [https://intelligence.org/ie-faq/#WhatIsFriendlyAI What is Friendly AI?] — A brief description of Friendly AI by the Machine Intelligence Research Institute.
- [https://intelligence.org/files/CFAI.pdf Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures] — A near book-length description from the MIRI
- [http://www.ssec.wisc.edu/~billh/g/SIAI_critique.html Critique of the MIRI Guidelines on Friendly AI] — by Bill Hibbard
- [http://www.optimal.org/peter/siai_guidelines.htm Commentary on MIRI's Guidelines on Friendly AI] — by Peter Voss.
- [https://www.thenewatlantis.com/publications/the-problem-with-friendly-artificial-intelligence The Problem with ‘Friendly’ Artificial Intelligence] — On the motives for and impossibility of FAI; by Adam Keiper and Ari N. Schulman.
{{Existential risk from artificial intelligence}}
{{DEFAULTSORT:Friendly Artificial Intelligence}}