AI safety#Adversarial robustness
{{Short description|Research area on making AI safe and beneficial}}
{{Artificial intelligence}}AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.
Beyond technical research, AI safety involves developing norms and policies that promote safety. It gained significant popularity in 2023, with rapid progress in generative AI and public concerns voiced by researchers and CEOs about potential dangers. During the 2023 AI Safety Summit, the United States and the United Kingdom both established their own AI Safety Institute. However, researchers have expressed concern that AI safety measures are not keeping pace with the rapid development of AI capabilities.{{Cite magazine |last=Perrigo |first=Billy |date=2023-11-02 |title=U.K.'s AI Safety Summit Ends With Limited, but Meaningful, Progress |url=https://time.com/6330877/uk-ai-safety-summit/ |access-date=2024-06-02 |magazine=Time |language=en}}
Motivations
Scholars discuss current risks from critical systems failures,{{cite thesis |type=PhD |last = De-Arteaga| first = Maria| title = Machine Learning in High-Stakes Settings: Risks and Opportunities| date = 2020-05-13 |publisher=Carnegie Mellon University}} bias,{{Cite journal |last1=Mehrabi |first1=Ninareh |last2=Morstatter |first2=Fred |last3=Saxena |first3=Nripsuta |last4=Lerman |first4=Kristina |last5=Galstyan |first5=Aram |date=2021 |title=A Survey on Bias and Fairness in Machine Learning |url=https://dl.acm.org/doi/10.1145/3457607 |journal=ACM Computing Surveys |language=en |volume=54 |issue=6 |pages=1–35 |doi=10.1145/3457607 |arxiv=1908.09635 |s2cid=201666566 |issn=0360-0300 |access-date=2022-11-28 |archive-date=2022-11-23 |archive-url=https://web.archive.org/web/20221123054208/https://dl.acm.org/doi/10.1145/3457607 |url-status=live }} and AI-enabled surveillance,{{Cite report| publisher = Carnegie Endowment for International Peace|last = Feldstein| first = Steven|authorlink=Steven Feldstein| title = The Global Expansion of AI Surveillance| date = 2019}} as well as emerging risks like technological unemployment, digital manipulation,{{Cite journal| last = Barnes| first = Beth| title = Risks from AI persuasion| journal = Lesswrong| accessdate = 2022-11-23| date = 2021| url = https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123055429/https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion| url-status = live}} weaponization,{{Cite journal |last1=Brundage |first1=Miles |last2=Avin |first2=Shahar |last3=Clark |first3=Jack |last4=Toner |first4=Helen |last5=Eckersley |first5=Peter |last6=Garfinkel |first6=Ben |last7=Dafoe |first7=Allan |last8=Scharre |first8=Paul |last9=Zeitzoff |first9=Thomas |last10=Filar |first10=Bobby |last11=Anderson |first11=Hyrum |last12=Roff |first12=Heather |last13=Allen |first13=Gregory C |last14=Steinhardt |first14=Jacob |last15=Flynn |first15=Carrick |date=2018-04-30 |others=Apollo-University Of Cambridge Repository, Apollo-University Of Cambridge Repository |title=The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |publisher=Apollo - University of Cambridge Repository |url=https://www.repository.cam.ac.uk/handle/1810/275332 |doi=10.17863/cam.22520 |s2cid=3385567 |access-date=2022-11-28 |archive-date=2022-11-23 |archive-url=https://web.archive.org/web/20221123055429/https://www.repository.cam.ac.uk/handle/1810/275332 |url-status=live }} AI-enabled cyberattacks{{Cite web |last=Davies |first=Pascale |date=December 26, 2022 |title=How NATO is preparing for a new era of AI cyber attacks |url=https://www.euronews.com/next/2022/12/26/ai-cyber-attacks-are-a-critical-threat-this-is-how-nato-is-countering-them |access-date=2024-03-23 |website=euronews |language=en}} and bioterrorism.{{Cite web |last=Ahuja |first=Anjana |date=February 7, 2024 |title=AI's bioterrorism potential should not be ruled out |url=https://www.ft.com/content/e2a28b73-9831-4e7e-be7c-a599d2498f24 |access-date=2024-03-23 |website=Financial Times}} They also discuss speculative risks from losing control of future artificial general intelligence (AGI) agents,{{Cite journal |last=Carlsmith |first=Joseph |date=2022-06-16 |title=Is Power-Seeking AI an Existential Risk? |arxiv=2206.13353}} or from AI enabling perpetually stable dictatorships.{{Cite web |last=Minardi |first=Di |date=16 October 2020 |title=The grim fate that could be 'worse than extinction' |url=https://www.bbc.com/future/article/20201014-totalitarian-world-in-chains-artificial-intelligence |access-date=2024-03-23 |website=BBC}}
= Existential safety =
{{See also|Existential risk from artificial general intelligence}}
Some have criticized concerns about AGI, such as Andrew Ng who compared them in 2015 to "worrying about overpopulation on Mars when we have not even set foot on the planet yet".{{Cite web |date=2023-04-04 |title=AGI Expert Peter Voss Says AI Alignment Problem is Bogus {{!}} NextBigFuture.com |url=https://www.nextbigfuture.com/2023/04/agi-expert-peter-voss-says-ai-alignment-problem-is-bogus.html |access-date=2023-07-23 |language=en-US}} Stuart J. Russell on the other side urges caution, arguing that "it is better to anticipate human ingenuity than to underestimate it".{{Cite web| last = Dafoe| first = Allan| title = Yes, We Are Worried About the Existential Risk of Artificial Intelligence| work = MIT Technology Review| accessdate = 2022-11-28| date = 2016| url = https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/| archive-date = 2022-11-28| archive-url = https://web.archive.org/web/20221128223713/https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/| url-status = live}}
AI researchers have widely differing opinions about the severity and primary sources of risk posed by AI technology{{Cite journal |last1=Grace |first1=Katja |last2=Salvatier |first2=John |last3=Dafoe |first3=Allan |last4=Zhang |first4=Baobao |last5=Evans |first5=Owain |date=2018-07-31 |title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts |url=https://jair.org/index.php/jair/article/view/11222/26431 |url-status=live |journal=Journal of Artificial Intelligence Research |volume=62 |pages=729–754 |arxiv=1705.08807 |doi=10.1613/jair.1.11222 |issn=1076-9757 |s2cid=8746462 |archive-url=https://web.archive.org/web/20230210114220/https://jair.org/index.php/jair/article/view/11222 |archive-date=2023-02-10 |access-date=2022-11-28 |doi-access=free}}{{Cite journal |last1=Zhang |first1=Baobao |last2=Anderljung |first2=Markus |last3=Kahn |first3=Lauren |last4=Dreksler |first4=Noemi |last5=Horowitz |first5=Michael C. |last6=Dafoe |first6=Allan |date=2021-05-05 |title=Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers |journal=Journal of Artificial Intelligence Research |volume=71 |arxiv=2105.02117 |doi=10.1613/jair.1.12895}}{{Cite web |last1=Stein-Perlman |first1=Zach |last2=Weinstein-Raun |first2=Benjamin |last3=Grace |date=2022-08-04 |title=2022 Expert Survey on Progress in AI |url=https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ |url-status=live |archive-url=https://web.archive.org/web/20221123052335/https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ |archive-date=2022-11-23 |accessdate=2022-11-23 |work=AI Impacts}} – though surveys suggest that experts take high consequence risks seriously. In two surveys of AI researchers, the median respondent was optimistic about AI overall, but placed a 5% probability on an "extremely bad (e.g. human extinction)" outcome of advanced AI. In a 2022 survey of the natural language processing community, 37% agreed or weakly agreed that it is plausible that AI decisions could lead to a catastrophe that is "at least as bad as an all-out nuclear war".{{Cite journal |last1=Michael |first1=Julian |last2=Holtzman |first2=Ari |author-link2=Ari Holtzman |last3=Parrish |first3=Alicia |last4=Mueller |first4=Aaron |last5=Wang |first5=Alex |last6=Chen |first6=Angelica |last7=Madaan |first7=Divyam |last8=Nangia |first8=Nikita |last9=Pang |first9=Richard Yuanzhe |last10=Phang |first10=Jason |last11=Bowman |first11=Samuel R. |date=2022-08-26 |title=What Do NLP Researchers Believe? Results of the NLP Community Metasurvey |journal=Association for Computational Linguistics |arxiv=2208.12852}}
History
Risks from AI began to be seriously discussed at the start of the computer age:
{{Blockquote
|text=Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes.
}}
In 1988 Blay Whitby published a book outlining the need for AI to be developed along ethical and socially responsible lines.https://sussex.figshare.com/articles/book/Artificial_intelligence_a_handbook_of_professionalism/23312414
From 2008 to 2009, the Association for the Advancement of Artificial Intelligence (AAAI) commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel was generally skeptical of the radical views expressed by science-fiction authors but agreed that "additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes".{{Cite web |last=Association for the Advancement of Artificial Intelligence |title=AAAI Presidential Panel on Long-Term AI Futures |url=https://www.aaai.org/Organization/presidential-panel.php |url-status=live |archive-url=https://web.archive.org/web/20220901033354/https://www.aaai.org/Organization/presidential-panel.php |archive-date=2022-09-01 |accessdate=2022-11-23}}
In 2011, Roman Yampolskiy introduced the term "AI safety engineering"{{Cite journal |last1=Yampolskiy |first1=Roman V. |last2=Spellchecker |first2=M. S. |date=2016-10-25 |title=Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures |arxiv=1610.07997 }} at the Philosophy and Theory of Artificial Intelligence conference,{{Cite web |title=PT-AI 2011 – Philosophy and Theory of Artificial Intelligence (PT-AI 2011) |url=https://conference.researchbib.com/view/event/13986 |url-status=live |archive-url=https://web.archive.org/web/20221123062236/https://conference.researchbib.com/view/event/13986 |archive-date=2022-11-23 |accessdate=2022-11-23}} listing prior failures of AI systems and arguing that "the frequency and seriousness of such events will steadily increase as AIs become more capable".{{Citation |last=Yampolskiy |first=Roman V. |title=Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach |date=2013 |url=http://link.springer.com/10.1007/978-3-642-31674-6_29 |work=Philosophy and Theory of Artificial Intelligence |volume=5 |pages=389–396 |editor-last=Müller |editor-first=Vincent C. |access-date=2022-11-23 |archive-url=https://web.archive.org/web/20230315184334/https://link.springer.com/chapter/10.1007/978-3-642-31674-6_29 |url-status=live |series=Studies in Applied Philosophy, Epistemology and Rational Ethics |place=Berlin; Heidelberg, Germany |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-31674-6_29 |isbn=978-3-642-31673-9 |archive-date=2023-03-15}}
In 2014, philosopher Nick Bostrom published the book Superintelligence: Paths, Dangers, Strategies. He has the opinion that the rise of AGI has the potential to create various societal issues, ranging from the displacement of the workforce by AI, manipulation of political and military structures, to even the possibility of human extinction.{{Cite journal |last1=McLean |first1=Scott |last2=Read |first2=Gemma J. M. |last3=Thompson |first3=Jason |last4=Baber |first4=Chris |last5=Stanton |first5=Neville A. |last6=Salmon |first6=Paul M. |date=2023-07-04 |title=The risks associated with Artificial General Intelligence: A systematic review |journal=Journal of Experimental & Theoretical Artificial Intelligence |language=en |volume=35 |issue=5 |pages=649–663 |doi=10.1080/0952813X.2021.1964003 |bibcode=2023JETAI..35..649M |s2cid=238643957 |issn=0952-813X|doi-access=free |hdl=11343/289595 |hdl-access=free }} His argument that future advanced systems may pose a threat to human existence prompted Elon Musk,{{Cite web |last=Wile |first=Rob |date=August 3, 2014 |title=Elon Musk: Artificial Intelligence Is 'Potentially More Dangerous Than Nukes' |url=https://www.businessinsider.com/elon-musk-compares-ai-to-nukes-2014-8 |access-date=2024-02-22 |website=Business Insider |language=en-US}} Bill Gates,{{Cite AV media |url=https://www.youtube.com/watch?v=NG0ZjUfOBUs&t=1055s&ab_channel=KaiserKuo |title=Baidu CEO Robin Li interviews Bill Gates and Elon Musk at the Boao Forum, March 29, 2015 |date=2015-03-31 |last=Kuo |first=Kaiser |time=55:49 |archive-url=https://web.archive.org/web/20221123072346/https://www.youtube.com/watch?v=NG0ZjUfOBUs&t=1055s&ab_channel=KaiserKuo |archive-date=2022-11-23 |url-status=live |accessdate=2022-11-23}} and Stephen Hawking{{Cite news| last = Cellan-Jones| first = Rory|authorlink= Rory Cellan-Jones| title = Stephen Hawking warns artificial intelligence could end mankind| work = BBC News| accessdate = 2022-11-23| date = 2014-12-02| url = https://www.bbc.com/news/technology-30290540| archive-date = 2015-10-30| archive-url = https://web.archive.org/web/20151030054329/http://www.bbc.com/news/technology-30290540| url-status = live}} to voice similar concerns.
In 2015, dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI and outlining concrete directions.{{Cite web| last = Future of Life Institute| title = Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter| work = Future of Life Institute| accessdate = 2022-11-23| url = https://futureoflife.org/open-letter/ai-open-letter/| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123072924/https://futureoflife.org/open-letter/ai-open-letter/| url-status = live}} To date, the letter has been signed by over 8000 people including Yann LeCun, Shane Legg, Yoshua Bengio, and Stuart Russell.
In the same year, a group of academics led by professor Stuart Russell founded the Center for Human-Compatible AI at the University of California Berkeley and the Future of Life Institute awarded $6.5 million in grants for research aimed at "ensuring artificial intelligence (AI) remains safe, ethical and beneficial".{{Cite web| last = Future of Life Institute| title = AI Research Grants Program| work = Future of Life Institute| date = October 2016| accessdate = 2022-11-23| url = https://futureoflife.org/ai-research/| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123074311/https://futureoflife.org/ai-research/| url-status = live}}
In 2016, the White House Office of Science and Technology Policy and Carnegie Mellon University announced The Public Workshop on Safety and Control for Artificial Intelligence,{{Cite web| title = SafArtInt 2016| accessdate = 2022-11-23| url = https://www.cmu.edu/safartint/| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123074311/https://www.cmu.edu/safartint/| url-status = live}} which was one of a sequence of four White House workshops aimed at investigating "the advantages and drawbacks" of AI.{{Cite web| last = Bach| first = Deborah| title = UW to host first of four White House public workshops on artificial intelligence| work = UW News| accessdate = 2022-11-23| date = 2016| url = https://www.washington.edu/news/2016/05/19/uw-to-host-first-of-four-white-house-public-workshops-on-artificial-intelligence/| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123074321/https://www.washington.edu/news/2016/05/19/uw-to-host-first-of-four-white-house-public-workshops-on-artificial-intelligence/| url-status = live}} In the same year, Concrete Problems in AI Safety – one of the first and most influential technical AI Safety agendas – was published.{{Cite journal |last1=Amodei |first1=Dario |last2=Olah |first2=Chris |last3=Steinhardt |first3=Jacob |last4=Christiano |first4=Paul |last5=Schulman |first5=John |last6=Mané |first6=Dan |date=2016-07-25 |title=Concrete Problems in AI Safety |arxiv=1606.06565 }}
In 2017, the Future of Life Institute sponsored the Asilomar Conference on Beneficial AI, where more than 100 thought leaders formulated principles for beneficial AI including "Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards".{{Cite web| last = Future of Life Institute| title = AI Principles| work = Future of Life Institute| accessdate = 2022-11-23| url = https://futureoflife.org/open-letter/ai-principles/| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123074312/https://futureoflife.org/open-letter/ai-principles/| url-status = live}}
In 2018, the DeepMind Safety team outlined AI safety problems in specification, robustness,{{Cite report |url=https://hal.science/hal-04612963 |title=International Scientific Report on the Safety of Advanced AI |last1=Yohsua |first1=Bengio |last2=Daniel |first2=Privitera |date=May 2024 |publisher=Department for Science, Innovation and Technology |last3=Tamay |first3=Besiroglu |last4=Rishi |first4=Bommasani |last5=Stephen |first5=Casper |last6=Yejin |first6=Choi |last7=Danielle |first7=Goldfarb |last8=Hoda |first8=Heidari |last9=Leila |first9=Khalatbari}} and assurance.{{Cite web| last = Research| first = DeepMind Safety| title = Building safe artificial intelligence: specification, robustness, and assurance| work = Medium| accessdate = 2022-11-23| date = 2018-09-27| url = https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1| archive-date = 2023-02-10| archive-url = https://web.archive.org/web/20230210114142/https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1| url-status = live}} The following year, researchers organized a workshop at ICLR that focused on these problem areas.{{Cite web| title = SafeML ICLR 2019 Workshop| accessdate = 2022-11-23| url = https://sites.google.com/view/safeml-iclr2019| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123074310/https://sites.google.com/view/safeml-iclr2019| url-status = live}}
In 2021, Unsolved Problems in ML Safety was published, outlining research directions in robustness, monitoring, alignment, and systemic safety.{{Cite journal |last1=Hendrycks |first1=Dan |last2=Carlini |first2=Nicholas |author2-link = Nicholas Carlini |last3=Schulman |first3=John |last4=Steinhardt |first4=Jacob |date=2022-06-16 |title=Unsolved Problems in ML Safety |arxiv=2109.13916 }}
In 2023, Rishi Sunak said he wants the United Kingdom to be the "geographical home of global AI safety regulation" and to host the first global summit on AI safety.{{Cite web |last=Browne |first=Ryan |date=2023-06-12 |title=British Prime Minister Rishi Sunak pitches UK as home of A.I. safety regulation as London bids to be next Silicon Valley |url=https://www.cnbc.com/2023/06/12/pm-rishi-sunak-pitches-uk-as-geographical-home-of-ai-regulation.html |access-date=2023-06-25 |website=CNBC |language=en}} The AI safety summit took place in November 2023, and focused on the risks of misuse and loss of control associated with frontier AI models.{{Cite news |last=Bertuzzi |first=Luca |date=October 18, 2023 |title=UK's AI safety summit set to highlight risk of losing human control over 'frontier' models |url=https://www.euractiv.com/section/artificial-intelligence/news/uks-ai-safety-summit-set-to-highlight-risk-of-losing-human-control-over-frontier-models/ |access-date=March 2, 2024 |work=Euractiv}} During the summit the intention to create the International Scientific Report on the Safety of Advanced AI{{Cite web |last1=Bengio |first1=Yoshua |last2=Privitera |first2=Daniel |last3=Bommasani |first3=Rishi |last4=Casper |first4=Stephen |last5=Goldfarb |first5=Danielle |last6=Mavroudis |first6=Vasilios |last7=Khalatbari |first7=Leila |last8=Mazeika |first8=Mantas |last9=Hoda |first9=Heidari |date=2024-05-17 |title=International Scientific Report on the Safety of Advanced AI |url=https://assets.publishing.service.gov.uk/media/6655982fdc15efdddf1a842f/international_scientific_report_on_the_safety_of_advanced_ai_interim_report.pdf |url-status=live |archive-url=https://web.archive.org/web/20240615000000/https://assets.publishing.service.gov.uk/media/6655982fdc15efdddf1a842f/international_scientific_report_on_the_safety_of_advanced_ai_interim_report.pdf |archive-date=2024-06-15 |access-date=2024-07-08 |website=GOV.UK}} [https://hal.science/hal-04612963/ Alt URL] was announced.
In 2024, The US and UK forged a new partnership on the science of AI safety. The MoU was signed on 1 April 2024 by US commerce secretary Gina Raimondo and UK technology secretary Michelle Donelan to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.{{cite news|last=Shepardson |first=David |title=US, Britain announce partnership on AI safety, testing |date=1 April 2024 |url=https://www.reuters.com/technology/us-britain-announce-formal-partnership-artificial-intelligence-safety-2024-04-01/ |access-date=2 April 2024}}
In 2025, an international team of 96 experts chaired by Yoshua Bengio published the first International AI Safety Report. The report, commissioned by 30 nations and the United Nations, represents the first global scientific review of potential risks associated with advanced artificial intelligence. It details potential threats stemming from misuse, malfunction, and societal disruption, with the objective of informing policy through evidence-based findings, without providing specific recommendations.{{Cite news |last= |first= |date=2025-01-29 |title=What International AI Safety report says on jobs, climate, cyberwar and more |url=https://www.theguardian.com/technology/2025/jan/29/what-international-ai-safety-report-says-jobs-climate-cyberwar-deepfakes-extinction |access-date=2025-03-03 |work=The Guardian |language=en-GB |issn=0261-3077}}{{Cite web |date=January 29, 2025 |title=Launch of the First International Report on AI Safety chaired by Yoshua Bengio |url=https://mila.quebec/en/news/launch-of-the-first-international-report-on-ai-safety-chaired-by-yoshua-bengio#:~:text=Wednesday,%20January%2029,%202025%C2%A0%E2%80%93%20First,for%20Science,%20Innovation%20and%20Technology |access-date=2025-03-03 |website=mila.quebec |language=en}}
Research focus
= Robustness =
== Adversarial robustness ==
AI systems are often vulnerable to adversarial examples or "inputs to machine learning (ML) models that an attacker has intentionally designed to cause the model to make a mistake".{{Cite web| last1 = Goodfellow| first1 = Ian| last2 = Papernot| first2 = Nicolas| last3 = Huang| first3 = Sandy| last4 = Duan| first4 = Rocky| last5 = Abbeel| first5 = Pieter| last6 = Clark| first6 = Jack| title = Attacking Machine Learning with Adversarial Examples| work = OpenAI| accessdate = 2022-11-24| date = 2017-02-24| url = https://openai.com/blog/adversarial-example-research/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124070536/https://openai.com/blog/adversarial-example-research/| url-status = live}} For example, in 2013, Szegedy et al. discovered that adding specific imperceptible perturbations to an image could cause it to be misclassified with high confidence.{{Cite journal |last1=Szegedy |first1=Christian |last2=Zaremba |first2=Wojciech |last3=Sutskever |first3=Ilya |last4=Bruna |first4=Joan |last5=Erhan |first5=Dumitru |last6=Goodfellow |first6=Ian |last7=Fergus |first7=Rob |date=2014-02-19 |title=Intriguing properties of neural networks |journal=ICLR |arxiv=1312.6199}} This continues to be an issue with neural networks, though in recent work the perturbations are generally large enough to be perceptible.{{Cite journal |last1=Kurakin |first1=Alexey |last2=Goodfellow |first2=Ian |last3=Bengio |first3=Samy |date=2017-02-10 |title=Adversarial examples in the physical world |journal=ICLR |arxiv=1607.02533}}{{Cite journal |last1=Madry |first1=Aleksander |last2=Makelov |first2=Aleksandar |last3=Schmidt |first3=Ludwig |last4=Tsipras |first4=Dimitris |last5=Vladu |first5=Adrian |date=2019-09-04 |title=Towards Deep Learning Models Resistant to Adversarial Attacks |journal=ICLR |arxiv=1706.06083}}{{Cite journal |last1=Kannan |first1=Harini |last2=Kurakin |first2=Alexey |last3=Goodfellow |first3=Ian |date=2018-03-16 |title=Adversarial Logit Pairing |arxiv=1803.06373 }}
File:Illustration of imperceptible adversarial pertubation.png
All of the images on the right are predicted to be an ostrich after the perturbation is applied. (Left) is a correctly predicted sample, (center) perturbation applied magnified by 10x, (right) adversarial example.
Adversarial robustness is often associated with security.{{Cite journal |last1=Gilmer |first1=Justin |last2=Adams |first2=Ryan P. |last3=Goodfellow |first3=Ian |last4=Andersen |first4=David |last5=Dahl |first5=George E. |date=2018-07-19 |title=Motivating the Rules of the Game for Adversarial Example Research |arxiv=1807.06732 }} Researchers demonstrated that an audio signal could be imperceptibly modified so that speech-to-text systems transcribe it to any message the attacker chooses.{{Cite journal |last1=Carlini |first1=Nicholas |last2=Wagner |first2=David |date=2018-03-29 |title=Audio Adversarial Examples: Targeted Attacks on Speech-to-Text |journal=IEEE Security and Privacy Workshops |arxiv=1801.01944}} Network intrusion{{Cite journal |last1=Sheatsley |first1=Ryan |last2=Papernot |first2=Nicolas |last3=Weisman |first3=Michael |last4=Verma |first4=Gunjan |last5=McDaniel |first5=Patrick |date=2022-09-09 |title=Adversarial Examples in Constrained Domains |arxiv=2011.01183 }} and malware{{Cite journal |last1=Suciu |first1=Octavian |last2=Coull |first2=Scott E. |last3=Johns |first3=Jeffrey |date=2019-04-13 |title=Exploring Adversarial Examples in Malware Detection |journal=IEEE Security and Privacy Workshops |arxiv=1810.08280}} detection systems also must be adversarially robust since attackers may design their attacks to fool detectors.
Models that represent objectives (reward models) must also be adversarially robust. For example, a reward model might estimate how helpful a text response is and a language model might be trained to maximize this score.{{Cite journal |last1=Ouyang |first1=Long |last2=Wu |first2=Jeff |last3=Jiang |first3=Xu |last4=Almeida |first4=Diogo |last5=Wainwright |first5=Carroll L. |last6=Mishkin |first6=Pamela |last7=Zhang |first7=Chong |last8=Agarwal |first8=Sandhini |last9=Slama |first9=Katarina |last10=Ray |first10=Alex |last11=Schulman |first11=John |last12=Hilton |first12=Jacob |last13=Kelton |first13=Fraser |last14=Miller |first14=Luke |last15=Simens |first15=Maddie |date=2022-03-04 |title=Training language models to follow instructions with human feedback |journal=NeurIPS |arxiv=2203.02155}} Researchers have shown that if a language model is trained for long enough, it will leverage the vulnerabilities of the reward model to achieve a better score and perform worse on the intended task.{{Cite journal |last1=Gao |first1=Leo |last2=Schulman |first2=John |last3=Hilton |first3=Jacob |date=2022-10-19 |title=Scaling Laws for Reward Model Overoptimization |journal=ICML |arxiv=2210.10760}} This issue can be addressed by improving the adversarial robustness of the reward model.{{Cite journal |last1=Yu |first1=Sihyun |last2=Ahn |first2=Sungsoo |last3=Song |first3=Le |last4=Shin |first4=Jinwoo |date=2021-10-27 |title=RoMA: Robust Model Adaptation for Offline Model-based Optimization |journal=NeurIPS |arxiv=2110.14188}} More generally, any AI system used to evaluate another AI system must be adversarially robust. This could include monitoring tools, since they could also potentially be tampered with to produce a higher reward.{{Cite journal |last1=Hendrycks |first1=Dan |last2=Mazeika |first2=Mantas |date=2022-09-20 |title=X-Risk Analysis for AI Research |arxiv=2206.05862 }}
= Monitoring =
== Estimating uncertainty ==
It is often important for human operators to gauge how much they should trust an AI system, especially in high-stakes settings such as medical diagnosis.{{Cite journal |last1=Tran |first1=Khoa A. |last2=Kondrashova |first2=Olga |last3=Bradley |first3=Andrew |last4=Williams |first4=Elizabeth D. |last5=Pearson |first5=John V. |last6=Waddell |first6=Nicola |date=2021 |title=Deep learning in cancer diagnosis, prognosis and treatment selection |journal=Genome Medicine |language=en |volume=13 |issue=1 |pages=152 |doi=10.1186/s13073-021-00968-x |issn=1756-994X |pmc=8477474 |pmid=34579788 |doi-access=free }} ML models generally express confidence by outputting probabilities; however, they are often overconfident,{{Cite conference| publisher = PMLR| volume = 70| pages = 1321–1330| last1 = Guo| first1 = Chuan| last2 = Pleiss| first2 = Geoff| last3 = Sun| first3 = Yu| last4 = Weinberger| first4 = Kilian Q.| title = On calibration of modern neural networks| book-title = Proceedings of the 34th international conference on machine learning| series = Proceedings of machine learning research| date = 2017-08-06}} especially in situations that differ from those that they were trained to handle.{{Cite journal |last1=Ovadia |first1=Yaniv |last2=Fertig |first2=Emily |last3=Ren |first3=Jie |last4=Nado |first4=Zachary |last5=Sculley |first5=D. |last6=Nowozin |first6=Sebastian |last7=Dillon |first7=Joshua V. |last8=Lakshminarayanan |first8=Balaji |last9=Snoek |first9=Jasper |date=2019-12-17 |title=Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift |journal=NeurIPS |arxiv=1906.02530}} Calibration research aims to make model probabilities correspond as closely as possible to the true proportion that the model is correct.
Similarly, anomaly detection or out-of-distribution (OOD) detection aims to identify when an AI system is in an unusual situation. For example, if a sensor on an autonomous vehicle is malfunctioning, or it encounters challenging terrain, it should alert the driver to take control or pull over.{{Cite book |last1=Bogdoll |first1=Daniel |last2=Breitenstein |first2=Jasmin |last3=Heidecker |first3=Florian |last4=Bieshaar |first4=Maarten |last5=Sick |first5=Bernhard |last6=Fingscheidt |first6=Tim |last7=Zöllner |first7=J. Marius |title=2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) |chapter=Description of Corner Cases in Automated Driving: Goals and Challenges |date=2021 |pages=1023–1028 |doi=10.1109/ICCVW54120.2021.00119|arxiv=2109.09607 |isbn=978-1-6654-0191-3 |s2cid=237572375 }} Anomaly detection has been implemented by simply training a classifier to distinguish anomalous and non-anomalous inputs,{{Cite journal |last1=Hendrycks |first1=Dan |last2=Mazeika |first2=Mantas |last3=Dietterich |first3=Thomas |date=2019-01-28 |title=Deep Anomaly Detection with Outlier Exposure |journal=ICLR |arxiv=1812.04606}} though a range of additional techniques are in use.{{Cite journal |last1=Wang |first1=Haoqi |last2=Li |first2=Zhizhong |last3=Feng |first3=Litong |last4=Zhang |first4=Wayne |date=2022-03-21 |title=ViM: Out-Of-Distribution with Virtual-logit Matching |journal=CVPR |arxiv=2203.10807}}{{Cite journal |last1=Hendrycks |first1=Dan |last2=Gimpel |first2=Kevin |date=2018-10-03 |title=A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks |journal=ICLR |arxiv=1610.02136}}
== Detecting malicious use ==
Scholars and government agencies have expressed concerns that AI systems could be used to help malicious actors to build weapons,{{Cite journal |last1=Urbina |first1=Fabio |last2=Lentzos |first2=Filippa |last3=Invernizzi |first3=Cédric |last4=Ekins |first4=Sean |date=2022 |title=Dual use of artificial-intelligence-powered drug discovery |journal=Nature Machine Intelligence |language=en |volume=4 |issue=3 |pages=189–191 |doi=10.1038/s42256-022-00465-9 |issn=2522-5839 |pmc=9544280 |pmid=36211133}} manipulate public opinion,{{Cite journal |last1=Center for Security and Emerging Technology |last2=Buchanan |first2=Ben |last3=Lohn |first3=Andrew |last4=Musser |first4=Micah |last5=Sedova |first5=Katerina |date=2021 |title=Truth, Lies, and Automation: How Language Models Could Change Disinformation |url=https://cset.georgetown.edu/publication/truth-lies-and-automation/ |journal= |doi=10.51593/2021ca003 |s2cid=240522878 |access-date=2022-11-28 |archive-date=2022-11-24 |archive-url=https://web.archive.org/web/20221124073719/https://cset.georgetown.edu/publication/truth-lies-and-automation/ |url-status=live |doi-access=free }}{{Cite web| title = Propaganda-as-a-service may be on the horizon if large language models are abused| work = VentureBeat| accessdate = 2022-11-24| date = 2021-12-14| url = https://venturebeat.com/ai/propaganda-as-a-service-may-be-on-the-horizon-if-large-language-models-are-abused/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124073718/https://venturebeat.com/ai/propaganda-as-a-service-may-be-on-the-horizon-if-large-language-models-are-abused/| url-status = live}} or automate cyber attacks.{{Cite journal |last1=Center for Security and Emerging Technology |last2=Buchanan |first2=Ben |last3=Bansemer |first3=John |last4=Cary |first4=Dakota |last5=Lucas |first5=Jack |last6=Musser |first6=Micah |date=2020 |title=Automating Cyber Attacks: Hype and Reality |newspaper=Center for Security and Emerging Technology |url=https://cset.georgetown.edu/publication/automating-cyber-attacks/ |doi=10.51593/2020ca002 |s2cid=234623943 |access-date=2022-11-28 |archive-date=2022-11-24 |archive-url=https://web.archive.org/web/20221124074301/https://cset.georgetown.edu/publication/automating-cyber-attacks/ |url-status=live |doi-access=free }} These worries are a practical concern for companies like OpenAI which host powerful AI tools online.{{Cite web| title = Lessons Learned on Language Model Safety and Misuse| work = OpenAI| accessdate = 2022-11-24| date = 2022-03-03| url = https://openai.com/blog/language-model-safety-and-misuse/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124074259/https://openai.com/blog/language-model-safety-and-misuse/| url-status = live}} In order to prevent misuse, OpenAI has built detection systems that flag or restrict users based on their activity.{{Cite web| last1 = Markov| first1 = Todor| last2 = Zhang| first2 = Chong| last3 = Agarwal| first3 = Sandhini| last4 = Eloundou| first4 = Tyna| last5 = Lee| first5 = Teddy| last6 = Adler| first6 = Steven| last7 = Jiang| first7 = Angela| last8 = Weng| first8 = Lilian| title = New-and-Improved Content Moderation Tooling| work = OpenAI| accessdate = 2022-11-24| date = 2022-08-10| url = https://openai.com/blog/new-and-improved-content-moderation-tooling/| archive-date = 2023-01-11| archive-url = https://web.archive.org/web/20230111020935/https://openai.com/blog/new-and-improved-content-moderation-tooling/| url-status = live}}
== Transparency ==
Neural networks have often been described as black boxes,{{Cite journal| doi = 10.1038/d41586-022-00858-1| last = Savage| first = Neil| title = Breaking into the black box of artificial intelligence| journal = Nature| accessdate = 2022-11-24| date = 2022-03-29| pmid = 35352042| s2cid = 247792459| url = https://www.nature.com/articles/d41586-022-00858-1| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124074724/https://www.nature.com/articles/d41586-022-00858-1| url-status = live}} meaning that it is difficult to understand why they make the decisions they do as a result of the massive number of computations they perform.{{Cite journal |last1=Center for Security and Emerging Technology |last2=Rudner |first2=Tim |last3=Toner |first3=Helen |date=2021 |title=Key Concepts in AI Safety: Interpretability in Machine Learning |url=https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/ |url-status=live |journal=CSET Issue Brief |doi=10.51593/20190042 |s2cid=233775541 |archive-url=https://web.archive.org/web/20221124075212/https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/ |archive-date=2022-11-24 |access-date=2022-11-28 |doi-access=free}} This makes it challenging to anticipate failures. In 2018, a self-driving car killed a pedestrian after failing to identify them. Due to the black box nature of the AI software, the reason for the failure remains unclear.{{Cite web| last = McFarland| first = Matt| title = Uber pulls self-driving cars after first fatal crash of autonomous vehicle| work = CNNMoney| accessdate = 2022-11-24| date = 2018-03-19| url = https://money.cnn.com/2018/03/19/technology/uber-autonomous-car-fatal-crash/index.html| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124075209/https://money.cnn.com/2018/03/19/technology/uber-autonomous-car-fatal-crash/index.html| url-status = live}} It also raises debates in healthcare over whether statistically efficient but opaque models should be used.{{Cite journal |last=Felder |first=Ryan Marshall |date=July 2021 |title=Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care |url=https://onlinelibrary.wiley.com/doi/10.1002/hast.1248 |journal=Hastings Center Report |language=en |volume=51 |issue=4 |pages=38–45 |doi=10.1002/hast.1248 |pmid=33821471 |issn=0093-0334}}
One critical benefit of transparency is explainability.{{Cite journal |last1=Doshi-Velez |first1=Finale |last2=Kortz |first2=Mason |last3=Budish |first3=Ryan |last4=Bavitz |first4=Chris |last5=Gershman |first5=Sam |last6=O'Brien |first6=David |last7=Scott |first7=Kate |last8=Schieber |first8=Stuart |last9=Waldo |first9=James |last10=Weinberger |first10=David |last11=Weller |first11=Adrian |last12=Wood |first12=Alexandra |date=2019-12-20 |title=Accountability of AI Under the Law: The Role of Explanation |arxiv=1711.01134 }} It is sometimes a legal requirement to provide an explanation for why a decision was made in order to ensure fairness, for example for automatically filtering job applications or credit score assignment.
Another benefit is to reveal the cause of failures. At the beginning of the 2020 COVID-19 pandemic, researchers used transparency tools to show that medical image classifiers were 'paying attention' to irrelevant hospital labels.{{Cite book |last1=Fong |first1=Ruth |last2=Vedaldi |first2=Andrea |title=2017 IEEE International Conference on Computer Vision (ICCV) |chapter=Interpretable Explanations of Black Boxes by Meaningful Perturbation |date=2017|pages=3449–3457 |doi=10.1109/ICCV.2017.371|arxiv=1704.03296 |isbn=978-1-5386-1032-9 |s2cid=1633753 }}
Transparency techniques can also be used to correct errors. For example, in the paper "Locating and Editing Factual Associations in GPT", the authors were able to identify model parameters that influenced how it answered questions about the location of the Eiffel tower. They were then able to 'edit' this knowledge to make the model respond to questions as if it believed the tower was in Rome instead of France.{{Cite journal| volume = 35| last1 = Meng| first1 = Kevin| last2 = Bau| first2 = David| last3 = Andonian| first3 = Alex| last4 = Belinkov| first4 = Yonatan| title = Locating and editing factual associations in GPT| journal = Advances in Neural Information Processing Systems| date = 2022| arxiv = 2202.05262}} Though in this case, the authors induced an error, these methods could potentially be used to efficiently fix them. Model editing techniques also exist in computer vision.{{Cite journal |last1=Bau |first1=David |last2=Liu |first2=Steven |last3=Wang |first3=Tongzhou |last4=Zhu |first4=Jun-Yan |last5=Torralba |first5=Antonio |date=2020-07-30 |title=Rewriting a Deep Generative Model |journal=ECCV |arxiv=2007.15646}}
Finally, some have argued that the opaqueness of AI systems is a significant source of risk and better understanding of how they function could prevent high-consequence failures in the future.{{Cite journal |last1=Räuker |first1=Tilman |last2=Ho |first2=Anson |last3=Casper |first3=Stephen |last4=Hadfield-Menell |first4=Dylan |date=2022-09-05 |title=Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks |journal=IEEE SaTML |arxiv=2207.13243}} "Inner" interpretability research aims to make ML models less opaque. One goal of this research is to identify what the internal neuron activations represent.{{Cite journal |last1=Bau |first1=David |last2=Zhou |first2=Bolei |last3=Khosla |first3=Aditya |last4=Oliva |first4=Aude |last5=Torralba |first5=Antonio |date=2017-04-19 |title=Network Dissection: Quantifying Interpretability of Deep Visual Representations |journal=CVPR |arxiv=1704.05796}}{{Cite journal |last1=McGrath |first1=Thomas |last2=Kapishnikov |first2=Andrei |last3=Tomašev |first3=Nenad |last4=Pearce |first4=Adam |last5=Wattenberg |first5=Martin |last6=Hassabis |first6=Demis |last7=Kim |first7=Been |last8=Paquet |first8=Ulrich |last9=Kramnik |first9=Vladimir |date=2022-11-22 |title=Acquisition of chess knowledge in AlphaZero |journal=Proceedings of the National Academy of Sciences |language=en |volume=119 |issue=47 |pages=e2206625119 |doi=10.1073/pnas.2206625119 |doi-access=free |pmid=36375061 |pmc=9704706 |arxiv=2111.09259 |bibcode=2022PNAS..11906625M |issn=0027-8424}} For example, researchers identified a neuron in the CLIP artificial intelligence system that responds to images of people in spider man costumes, sketches of spiderman, and the word 'spider'.{{Cite journal| doi = 10.23915/distill.00030| last1 = Goh| first1 = Gabriel| last2 = Cammarata| first2 = Nick| last3 = Voss| first3 = Chelsea| last4 = Carter| first4 = Shan| last5 = Petrov| first5 = Michael| last6 = Schubert| first6 = Ludwig| last7 = Radford| first7 = Alec| last8 = Olah| first8 = Chris| title = Multimodal neurons in artificial neural networks| journal = Distill| date = 2021| volume = 6| issue = 3| s2cid = 233823418| doi-access = free}} It also involves explaining connections between these neurons or 'circuits'.{{Cite journal| doi = 10.23915/distill.00024.001| last1 = Olah| first1 = Chris| last2 = Cammarata| first2 = Nick| last3 = Schubert| first3 = Ludwig| last4 = Goh| first4 = Gabriel| last5 = Petrov| first5 = Michael| last6 = Carter| first6 = Shan| title = Zoom in: An introduction to circuits| journal = Distill| date = 2020| volume = 5| issue = 3| s2cid = 215930358| doi-access = free}}{{Cite journal| doi = 10.23915/distill.00024.006| last1 = Cammarata| first1 = Nick| last2 = Goh| first2 = Gabriel| last3 = Carter| first3 = Shan| last4 = Voss| first4 = Chelsea| last5 = Schubert| first5 = Ludwig| last6 = Olah| first6 = Chris| title = Curve circuits| journal = Distill| date = 2021| volume = 6| issue = 1| doi-broken-date = 1 November 2024| url = https://distill.pub/2020/circuits/curve-circuits/| access-date = 5 December 2022| archive-date = 5 December 2022| archive-url = https://web.archive.org/web/20221205140056/https://distill.pub/2020/circuits/curve-circuits/| url-status = live}} For example, researchers have identified pattern-matching mechanisms in transformer attention that may play a role in how language models learn from their context.{{Cite journal| last1 = Olsson| first1 = Catherine| last2 = Elhage| first2 = Nelson| last3 = Nanda| first3 = Neel| last4 = Joseph| first4 = Nicholas| last5 = DasSarma| first5 = Nova| last6 = Henighan| first6 = Tom| last7 = Mann| first7 = Ben| last8 = Askell| first8 = Amanda| last9 = Bai| first9 = Yuntao| last10 = Chen| first10 = Anna| last11 = Conerly| first11 = Tom| last12 = Drain| first12 = Dawn| last13 = Ganguli| first13 = Deep| last14 = Hatfield-Dodds| first14 = Zac| last15 = Hernandez| first15 = Danny| last16 = Johnston| first16 = Scott| last17 = Jones| first17 = Andy| last18 = Kernion| first18 = Jackson| last19 = Lovitt| first19 = Liane| last20 = Ndousse| first20 = Kamal| last21 = Amodei| first21 = Dario| last22 = Brown| first22 = Tom| last23 = Clark| first23 = Jack| last24 = Kaplan| first24 = Jared| last25 = McCandlish| first25 = Sam| last26 = Olah| first26 = Chris| title = In-context learning and induction heads| journal = Transformer Circuits Thread| date = 2022| arxiv = 2209.11895}} "Inner interpretability" has been compared to neuroscience. In both cases, the goal is to understand what is going on in an intricate system, though ML researchers have the benefit of being able to take perfect measurements and perform arbitrary ablations.{{Cite web| last = Olah| first = Christopher| title = Interpretability vs Neuroscience [rough note]| accessdate = 2022-11-24| url = https://colah.github.io/notes/interp-v-neuro/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124114744/https://colah.github.io/notes/interp-v-neuro/| url-status = live}}
== Detecting trojans ==
Machine learning models can potentially contain "trojans" or "backdoors": vulnerabilities that malicious actors maliciously build into an AI system. For example, a trojaned facial recognition system could grant access when a specific piece of jewelry is in view; or a trojaned autonomous vehicle may function normally until a specific trigger is visible.{{Cite journal |last1=Gu |first1=Tianyu |last2=Dolan-Gavitt |first2=Brendan |last3=Garg |first3=Siddharth |date=2019-03-11 |title=BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain |arxiv=1708.06733 }} Note that an adversary must have access to the system's training data in order to plant a trojan. {{Citation needed|reason=It is not obvious why this is a necessary condition. Or, consider restricting the scope of "trojan" in this particular sense.|date=March 2024}} This might not be difficult to do with some large models like CLIP or GPT-3 as they are trained on publicly available internet data.{{Cite journal |last1=Chen |first1=Xinyun |last2=Liu |first2=Chang |last3=Li |first3=Bo |last4=Lu |first4=Kimberly |last5=Song |first5=Dawn |date=2017-12-14 |title=Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning |arxiv=1712.05526 }} Researchers were able to plant a trojan in an image classifier by changing just 300 out of 3 million of the training images.{{Cite journal |last1=Carlini |first1=Nicholas |last2=Terzis |first2=Andreas |date=2022-03-28 |title=Poisoning and Backdooring Contrastive Learning |journal=ICLR |arxiv=2106.09667}} In addition to posing a security risk, researchers have argued that trojans provide a concrete setting for testing and developing better monitoring tools.
A 2024 research paper by Anthropic showed that large language models could be trained with persistent backdoors. These "sleeper agent" models could be programmed to generate malicious outputs (such as vulnerable code) after a specific date, while behaving normally beforehand. Standard AI safety measures, such as supervised fine-tuning, reinforcement learning and adversarial training, failed to remove these backdoors.{{Cite news |date=16 January 2024 |title=How 'sleeper agent' AI assistants can sabotage code |url=https://www.theregister.com/2024/01/16/poisoned_ai_models/ |archive-url=http://web.archive.org/web/20241224045421/https://www.theregister.com/2024/01/16/poisoned_ai_models |archive-date=2024-12-24 |access-date=2025-01-12 |work=The Register |language=en}}
= Alignment =
{{Excerpt|AI alignment|templates=Cite,cite,Citation}}
= Systemic safety and sociotechnical factors =
It is common for AI risks (and technological risks more generally) to be categorized as misuse or accidents.{{Cite web| last1 = Zwetsloot| first1 = Remco| last2 = Dafoe| first2 = Allan| title = Thinking About Risks From AI: Accidents, Misuse and Structure| work = Lawfare| access-date = 2022-11-24| date = 2019-02-11| url = https://www.lawfaremedia.org/article/thinking-about-risks-ai-accidents-misuse-and-structure| archive-date = 2023-08-19| archive-url = https://web.archive.org/web/20230819035804/https://www.lawfaremedia.org/article/thinking-about-risks-ai-accidents-misuse-and-structure| url-status = live}} Some scholars have suggested that this framework falls short. For example, the Cuban Missile Crisis was not clearly an accident or a misuse of technology. Policy analysts Zwetsloot and Dafoe wrote, "The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways… Often, though, the relevant causal chain is much longer." Risks often arise from 'structural' or 'systemic' factors such as competitive pressures, diffusion of harms, fast-paced development, high levels of uncertainty, and inadequate safety culture. In the broader context of safety engineering, structural factors like 'organizational safety culture' play a central role in the popular STAMP risk analysis framework.{{Cite journal |last1=Zhang |first1=Yingyu |last2=Dong |first2=Chuntong |last3=Guo |first3=Weiqun |last4=Dai |first4=Jiabao |last5=Zhao |first5=Ziming |date=2022 |title=Systems theoretic accident model and process (STAMP): A literature review |url=https://linkinghub.elsevier.com/retrieve/pii/S0925753521004367 |journal=Safety Science |language=en |volume=152 |pages=105596 |doi=10.1016/j.ssci.2021.105596 |s2cid=244550153 |access-date=2022-11-28 |archive-date=2023-03-15 |archive-url=https://web.archive.org/web/20230315184342/https://www.sciencedirect.com/science/article/abs/pii/S0925753521004367?via%3Dihub |url-status=live }}
Inspired by the structural perspective, some researchers have emphasized the importance of using machine learning to improve sociotechnical safety factors, for example, using ML for cyber defense, improving institutional decision-making, and facilitating cooperation. Others have emphasized the importance of involving both AI practitioners and domain experts in the design process to address structural vulnerabilities.
== Cyber defense ==
Some scholars are concerned that AI will exacerbate the already imbalanced game between cyber attackers and cyber defenders.{{Cite journal |last1=Center for Security and Emerging Technology |last2=Hoffman |first2=Wyatt |date=2021 |title=AI and the Future of Cyber Competition |url=https://cset.georgetown.edu/publication/ai-and-the-future-of-cyber-competition/ |url-status=live |journal=CSET Issue Brief |doi=10.51593/2020ca007 |s2cid=234245812 |archive-url=https://web.archive.org/web/20221124122253/https://cset.georgetown.edu/publication/ai-and-the-future-of-cyber-competition/ |archive-date=2022-11-24 |access-date=2022-11-28 |doi-access=free}} This would increase 'first strike' incentives and could lead to more aggressive and destabilizing attacks. In order to mitigate this risk, some have advocated for an increased emphasis on cyber defense. In addition, software security is essential for preventing powerful AI models from being stolen and misused. Recent studies have shown that AI can significantly enhance both technical and managerial cybersecurity tasks by automating routine tasks and improving overall efficiency.{{Cite journal |last1=Gafni |first1=Ruti |last2=Levy |first2=Yair |date=2024-01-01 |title=The role of artificial intelligence (AI) in improving technical and managerial cybersecurity tasks' efficiency |url=https://doi.org/10.1108/ICS-04-2024-0102 |journal=Information & Computer Security |pages=711–728 |volume=32 |issue=5 |doi=10.1108/ICS-04-2024-0102 |issn=2056-4961}}
== Improving institutional decision-making ==
The advancement of AI in economic and military domains could precipitate unprecedented political challenges.{{Cite journal |last1=Center for Security and Emerging Technology |last2=Imbrie |first2=Andrew |last3=Kania |first3=Elsa |date=2019 |title=AI Safety, Security, and Stability Among Great Powers: Options, Challenges, and Lessons Learned for Pragmatic Engagement |url=https://cset.georgetown.edu/publication/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement/ |journal= |doi=10.51593/20190051 |s2cid=240957952 |access-date=2022-11-28 |archive-date=2022-11-24 |archive-url=https://web.archive.org/web/20221124122652/https://cset.georgetown.edu/publication/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement/ |url-status=live |doi-access=free }} Some scholars have compared AI race dynamics to the cold war, where the careful judgment of a small number of decision-makers often spelled the difference between stability and catastrophe.{{Cite AV media| people = Future of Life Institute| title = AI Strategy, Policy, and Governance (Allan Dafoe)| accessdate = 2022-11-23| date = 2019-03-27| time = 22:05| url = https://www.youtube.com/watch?v=2IpJ8TIKKtI| archive-date = 2022-11-23| archive-url = https://web.archive.org/web/20221123055429/https://www.youtube.com/watch?v=2IpJ8TIKKtI| url-status = live}} AI researchers have argued that AI technologies could also be used to assist decision-making. For example, researchers are beginning to develop AI forecasting{{Cite journal |last1=Zou |first1=Andy |last2=Xiao |first2=Tristan |last3=Jia |first3=Ryan |last4=Kwon |first4=Joe |last5=Mazeika |first5=Mantas |last6=Li |first6=Richard |last7=Song |first7=Dawn |last8=Steinhardt |first8=Jacob |last9=Evans |first9=Owain |last10=Hendrycks |first10=Dan |date=2022-10-09 |title=Forecasting Future World Events with Neural Networks |journal=NeurIPS |arxiv=2206.15474}} and advisory systems.{{Cite journal |last1=Gathani |first1=Sneha |last2=Hulsebos |first2=Madelon |last3=Gale |first3=James |last4=Haas |first4=Peter J. |last5=Demiralp |first5=Çağatay |date=2022-02-08 |title=Augmenting Decision Making via Interactive What-If Analysis |journal=Conference on Innovative Data Systems Research |arxiv=2109.06160}}
== Facilitating cooperation ==
Many of the largest global threats (nuclear war,{{Citation |last=Lindelauf |first=Roy |title=Nuclear Deterrence in the Algorithmic Age: Game Theory Revisited |date=2021 |work=NL ARMS Netherlands Annual Review of Military Studies 2020 |series=Nl Arms |pages=421–436 |editor-last=Osinga |editor-first=Frans |place=The Hague |publisher=T.M.C. Asser Press |language=en |doi=10.1007/978-94-6265-419-8_22 |isbn=978-94-6265-418-1 |s2cid=229449677 |editor2-last=Sweijs |editor2-first=Tim |doi-access=free }} climate change,{{Cite web| last = Newkirk II| first = Vann R.| title = Is Climate Change a Prisoner's Dilemma or a Stag Hunt?| work = The Atlantic| accessdate = 2022-11-24| date = 2016-04-21| url = https://www.theatlantic.com/politics/archive/2016/04/climate-change-game-theory-models/624253/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124123011/https://www.theatlantic.com/politics/archive/2016/04/climate-change-game-theory-models/624253/| url-status = live}} etc.) have been framed as cooperation challenges. As in the well-known prisoner's dilemma scenario, some dynamics may lead to poor results for all players, even when they are optimally acting in their self-interest. For example, no single actor has strong incentives to address climate change even though the consequences may be significant if no one intervenes.
A salient AI cooperation challenge is avoiding a 'race to the bottom'.{{Cite report | publisher = Future of Humanity Institute, Oxford University| last1 = Armstrong| first1 = Stuart| last2 = Bostrom| first2 = Nick| last3 = Shulman| first3 = Carl| title = Racing to the Precipice: a Model of Artificial Intelligence Development}} In this scenario, countries or companies race to build more capable AI systems and neglect safety, leading to a catastrophic accident that harms everyone involved. Concerns about scenarios like these have inspired both political{{Cite report | publisher = Centre for the Governance of AI, Future of Humanity Institute, University of Oxford| last = Dafoe| first = Allan| title = AI Governance: A Research Agenda}} and technical{{Cite journal |last1=Dafoe |first1=Allan |last2=Hughes |first2=Edward |last3=Bachrach |first3=Yoram |last4=Collins |first4=Tantum |last5=McKee |first5=Kevin R. |last6=Leibo |first6=Joel Z. |last7=Larson |first7=Kate |last8=Graepel |first8=Thore |date=2020-12-15 |title=Open Problems in Cooperative AI |journal=NeurIPS |arxiv=2012.08630}} efforts to facilitate cooperation between humans, and potentially also between AI systems. Most AI research focuses on designing individual agents to serve isolated functions (often in 'single-player' games).{{Cite journal |last1=Dafoe |first1=Allan |last2=Bachrach |first2=Yoram |last3=Hadfield |first3=Gillian |last4=Horvitz |first4=Eric |last5=Larson |first5=Kate |last6=Graepel |first6=Thore |date=2021 |title=Cooperative AI: machines must learn to find common ground |url=https://www.nature.com/articles/d41586-021-01170-0 |journal=Nature |volume=593 |issue=7857 |pages=33–36 |doi=10.1038/d41586-021-01170-0 |pmid=33947992 |bibcode=2021Natur.593...33D |s2cid=233740521 |accessdate=2022-11-24 |archive-date=2022-11-22 |archive-url=https://web.archive.org/web/20221122230552/https://www.nature.com/articles/d41586-021-01170-0 |url-status=live }} Scholars have suggested that as AI systems become more autonomous, it may become essential to study and shape the way they interact.{{Cite journal |last1=Gazos |first1=Alexandros |last2=Kahn |first2=James |last3=Kusche |first3=Isabel |last4=Büscher |first4=Christian |last5=Götz |first5=Markus |date=2025-04-01 |title=Organising AI for safety: Identifying structural vulnerabilities to guide the design of AI-enhanced socio-technical systems |journal=Safety Science |volume=184 |pages=106731 |doi=10.1016/j.ssci.2024.106731 |issn=0925-7535|doi-access=free }}
== Challenges of large language models ==
In recent years, the development of large language models (LLMs) has raised unique concerns within the field of AI safety. Researchers Bender and Gebru et al.Bender, E.M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://doi.org/10.1145/3442188.3445922. have highlighted the environmental and financial costs associated with training these models, emphasizing that the energy consumption and carbon footprint of training procedures like those for Transformer models can be substantial. Moreover, these models often rely on massive, uncurated Internet-based datasets, which can encode hegemonic and biased viewpoints, further marginalizing underrepresented groups. The large-scale training data, while vast, does not guarantee diversity and often reflects the worldviews of privileged demographics, leading to models that perpetuate existing biases and stereotypes. This situation is exacerbated by the tendency of these models to produce seemingly coherent and fluent text, which can mislead users into attributing meaning and intent where none exists, a phenomenon described as 'stochastic parrots'. These models, therefore, pose risks of amplifying societal biases, spreading misinformation, and being used for malicious purposes, such as generating extremist propaganda or deepfakes. To address these challenges, researchers advocate for more careful planning in dataset creation and system development, emphasizing the need for research projects that contribute positively towards an equitable technological ecosystem.Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. arXiv preprint arXiv:1906.02243.Schwartz, R., Dodge, J., Smith, N.A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54-63. https://doi.org/10.1145/3442188.3445922.
The unique challenges posed by LLMs also extend to security vulnerabilities. These include various manipulation techniques, such as prompt injection, Misinformation Generation and model stealing,{{Cite web |title=How To Hack Large Language Models (LLM) |url=https://yourgpt.ai/blog/growth/how-to-hack-large-language-models-llm |url-status=live}} which can be exploited to compromise their intended function. This can allow attackers to bypass safety measures and elicit unintended responses
In governance
File:Vice President Harris at the group photo of the 2023 AI Safety Summit.jpg of November 2023{{Cite news |last1=Satariano |first1=Adam |last2=Specia |first2=Megan |date=2023-11-01 |title=Global Leaders Warn A.I. Could Cause 'Catastrophic' Harm |url=https://www.nytimes.com/2023/11/01/world/europe/uk-ai-summit-sunak.html |access-date=2024-04-20 |work=The New York Times |language=en-US |issn=0362-4331}}]]
AI governance is broadly concerned with creating norms, standards, and regulations to guide the use and development of AI systems.
= Research =
AI safety governance research ranges from foundational investigations into the potential impacts of AI to specific applications. On the foundational side, researchers have argued that AI could transform many aspects of society due to its broad applicability, comparing it to electricity and the steam engine.{{Cite journal |last=Crafts |first=Nicholas |date=2021-09-23 |title=Artificial intelligence as a general-purpose technology: an historical perspective |url=https://academic.oup.com/oxrep/article/37/3/521/6374675 |journal=Oxford Review of Economic Policy |language=en |volume=37 |issue=3 |pages=521–536 |doi=10.1093/oxrep/grab012 |issn=0266-903X |access-date=2022-11-28 |archive-date=2022-11-24 |archive-url=https://web.archive.org/web/20221124130718/https://academic.oup.com/oxrep/article/37/3/521/6374675 |url-status=live |doi-access=free }} Some work has focused on anticipating specific risks that may arise from these impacts – for example, risks from mass unemployment,{{Cite journal |last1=葉俶禎 |last2=黃子君 |last3=張媁雯 |last4=賴志樫 |date=2020-12-01 |title=Labor Displacement in Artificial Intelligence Era: A Systematic Literature Review |journal=臺灣東亞文明研究學刊 |language=en |volume=17 |issue=2 |doi=10.6163/TJEAS.202012_17(2).0002 |issn=1812-6243}} weaponization,{{Cite journal |last=Johnson |first=James |date=2019-04-03 |title=Artificial intelligence & future warfare: implications for international security |url=https://www.tandfonline.com/doi/full/10.1080/14751798.2019.1600800 |journal=Defense & Security Analysis |language=en |volume=35 |issue=2 |pages=147–169 |doi=10.1080/14751798.2019.1600800 |s2cid=159321626 |issn=1475-1798 |access-date=2022-11-28 |archive-date=2022-11-24 |archive-url=https://web.archive.org/web/20221124125204/https://www.tandfonline.com/doi/full/10.1080/14751798.2019.1600800 |url-status=live }} disinformation,{{Cite journal |last=Kertysova |first=Katarina |date=2018-12-12 |title=Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered |journal=Security and Human Rights |volume=29 |issue=1–4 |pages=55–81 |doi=10.1163/18750230-02901005 |s2cid=216896677 |issn=1874-7337 |doi-access=free }} surveillance,{{Cite conference| publisher = Carnegie Endowment for International Peace| last = Feldstein| first = Steven| title = The Global Expansion of AI Surveillance| date = 2019}} and the concentration of power.{{Cite book |last1=Agrawal |first1=Ajay |url=https://www.worldcat.org/oclc/1099435014 |title=The economics of artificial intelligence: an agenda |last2=Gans |first2=Joshua |last3=Goldfarb |first3=Avi |date=2019 |isbn=978-0-226-61347-5 |location=Chicago, Illinois |language=en-us |oclc=1099435014 |access-date=2022-11-28 |archive-url=https://web.archive.org/web/20230315184354/https://www.worldcat.org/title/1099435014 |archive-date=2023-03-15 |url-status=live}} Other work explores underlying risk factors such as the difficulty of monitoring the rapidly evolving AI industry,{{Cite journal |last1=Whittlestone |first1=Jess |last2=Clark |first2=Jack |date=2021-08-31 |title=Why and How Governments Should Monitor AI Development |arxiv=2108.12427 }} the availability of AI models,{{Cite web| last = Shevlane| first = Toby| title = Sharing Powerful AI Models {{!}} GovAI Blog| work = Center for the Governance of AI| accessdate = 2022-11-24| date = 2022| url = https://www.governance.ai/post/sharing-powerful-ai-models| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124125202/https://www.governance.ai/post/sharing-powerful-ai-models| url-status = live}} and 'race to the bottom' dynamics.{{Cite journal |last1=Askell |first1=Amanda |last2=Brundage |first2=Miles |last3=Hadfield |first3=Gillian |date=2019-07-10 |title=The Role of Cooperation in Responsible AI Development |arxiv=1907.04534 }} Allan Dafoe, the head of longterm governance and strategy at DeepMind has emphasized the dangers of racing and the potential need for cooperation: "it may be close to a necessary and sufficient condition for AI safety and alignment that there be a high degree of caution prior to deploying advanced powerful systems; however, if actors are competing in a domain with large returns to first-movers or relative advantage, then they will be pressured to choose a sub-optimal level of caution". A research stream focuses on developing approaches, frameworks, and methods to assess AI accountability, guiding and promoting audits of AI-based systems.{{Citation |last1=Gursoy |first1=Furkan |title=System Cards for AI-Based Decision-Making for Public Policy |date=2022-08-31 |arxiv=2203.04754 |last2=Kakadiaris |first2=Ioannis A.}}{{Cite book |last1=Cobbe |first1=Jennifer |last2=Lee |first2=Michelle Seng Ah |last3=Singh |first3=Jatinder |chapter=Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems |date=2021-03-01 |title=Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency |series=FAccT '21 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=598–609 |doi=10.1145/3442188.3445921 |isbn=978-1-4503-8309-7|doi-access=free }}{{Cite book |last1=Raji |first1=Inioluwa Deborah |last2=Smart |first2=Andrew |last3=White |first3=Rebecca N. |last4=Mitchell |first4=Margaret |last5=Gebru |first5=Timnit |last6=Hutchinson |first6=Ben |last7=Smith-Loud |first7=Jamila |last8=Theron |first8=Daniel |last9=Barnes |first9=Parker |chapter=Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing |date=2020-01-27 |title=Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency |series=FAT* '20 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=33–44 |doi=10.1145/3351095.3372873 |isbn=978-1-4503-6936-7|doi-access=free }} A key challenge for these approaches is a lack of widely-accepted standards, and ambiguity about what the methods would require.{{Cite journal |last=Manheim |first=David |last2=Martin |first2=Sammy |last3=Bailey |first3=Mark |last4=Samin |first4=Mikhail |last5=Greutzmacher |first5=Ross |title=The necessity of AI audit standards boards |journal=AI & Society |year=2025 |doi=10.1007/s00146-025-02320-y |url=https://link.springer.com/article/10.1007/s00146-025-02320-y|arxiv=2404.13060 }}{{Cite journal |last1=Novelli |first1=Claudio |last2=Taddeo |first2=Mariarosaria |last3=Floridi |first3=Luciano |title=Accountability in artificial intelligence: what it is and how it works |journal=AI & Society |volume=39 |issue=4 |pages=1871–1882 |year=2024 |doi=10.1007/s00146-023-01635-y |url=https://link.springer.com/article/10.1007/s00146-023-01635-y|hdl=11585/914099 |hdl-access=free }}
Efforts to enhance AI safety include frameworks designed to align AI outputs with ethical guidelines and reduce risks like misuse and data leakage. Tools such as Nvidia's Guardrails,{{cite web |title=NeMo Guardrails |url=https://github.com/NVIDIA/NeMo-Guardrails |access-date=2024-12-08 |website=NVIDIA NeMo Guardrails}} Llama Guard,{{cite web |title=Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations |url=https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/ |access-date=2024-12-08 |website=Meta AI}} Preamble's customizable guardrails{{cite arXiv |last1=Šekrst |first1=Kristina |last2=McHugh |first2=Jeremy |last3=Cefalu |first3=Jonathan Rodriguez |year=2024 |title=AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development |eprint=2411.14442 |class=cs.CY}} and Claude’s Constitution mitigate vulnerabilities like prompt injection and ensure outputs adhere to predefined principles. These frameworks are often integrated into AI systems to improve safety and reliability.{{cite arXiv |eprint=2402.01822 |class=cs |first1=Yi |last1=Dong |first2=Ronghui |last2=Mu |title=Building Guardrails for Large Language Models |last3=Jin |first3=Gaojie |last4=Qi |first4=Yi |last5=Hu |first5=Jinwei |last6=Zhao |first6=Xingyu |last7=Meng |first7=Jie |last8=Ruan |first8=Wenjie |last9=Huang |first9=Xiaowei |year=2024}}
= Philosophical perspectives =
{{See also|Ethics of artificial intelligence}}
The field of AI safety is deeply intertwined with philosophical considerations, particularly in the realm of ethics. Deontological ethics, which emphasizes adherence to moral rules, has been proposed as a framework for aligning AI systems with human values. By embedding deontological principles, AI systems can be guided to avoid actions that cause harm, ensuring their operations remain within ethical boundaries.{{cite journal |last1=D’Alessandro |first1=W. |date=2024 |title=Deontology and safe artificial intelligence |journal=Philosophical Studies|doi=10.1007/s11098-024-02174-y |doi-access=free }}
= Scaling local measures to global solutions =
In addressing the AI safety problem it is important to stress the distinction between local and global solutions. Local solutions focus on individual AI systems, ensuring they are safe and beneficial, while global solutions seek to implement safety measures for all AI systems across various jurisdictions. Some researchers{{Cite journal |last1=Turchin |first1=Alexey |last2=Dench |first2=David |last3=Green |first3=Brian Patrick |title=Global Solutions vs. Local Solutions for the AI Safety Problem |journal=Big Data and Cognitive Computing |volume=3 |issue=16 |pages=1–25 |year=2019 |doi=10.3390/bdcc3010016 |doi-access=free }} argue for the necessity of scaling local safety measures to a global level, proposing a classification for these global solutions. This approach underscores the importance of collaborative efforts in the international governance of AI safety, emphasizing that no single entity can effectively manage the risks associated with AI technologies. This perspective aligns with ongoing efforts in international policy-making and regulatory frameworks, which aim to address the complex challenges posed by advanced AI systems worldwide.{{Cite news |last=Ziegler |first=Bart |date=8 April 2022 |title=Is It Time to Regulate AI? |work=Wall Street Journal}}{{Cite news |last=Smith |first=John |date=15 May 2022 |title=Global Governance of Artificial Intelligence: Opportunities and Challenges |work=The Guardian}}
= Government action =
{{See also|Regulation of artificial intelligence}}
Some experts have argued that it is too early to regulate AI, expressing concerns that regulations will hamper innovation and it would be foolish to "rush to regulate in ignorance".{{Cite news |last=Ziegler |first=Bart |date=8 April 2022 |title=Is It Time to Regulate AI? |work=Wall Street Journal |url=https://www.wsj.com/articles/is-it-time-to-regulate-ai-11649433600 |url-status=live |accessdate=2022-11-24 |archive-url=https://web.archive.org/web/20221124125645/https://www.wsj.com/articles/is-it-time-to-regulate-ai-11649433600 |archive-date=2022-11-24}}{{Cite journal |last=Reed |first=Chris |date=2018-09-13 |title=How should we regulate artificial intelligence? |journal=Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences |language=en |volume=376 |issue=2128 |pages=20170360 |doi=10.1098/rsta.2017.0360 |issn=1364-503X |pmc=6107539 |pmid=30082306|bibcode=2018RSPTA.37670360R }} Others, such as business magnate Elon Musk, call for pre-emptive action to mitigate catastrophic risks.{{Cite web| last = Belton| first = Keith B.| title = How Should AI Be Regulated?| work = IndustryWeek| accessdate = 2022-11-24| date = 2019-03-07| url = https://www.industryweek.com/technology-and-iiot/article/22027274/how-should-ai-be-regulated| archive-date = 2022-01-29| archive-url = https://web.archive.org/web/20220129114109/https://www.industryweek.com/technology-and-iiot/article/22027274/how-should-ai-be-regulated| url-status = live}}
Outside of formal legislation, government agencies have put forward ethical and safety recommendations. In March 2021, the US National Security Commission on Artificial Intelligence reported that advances in AI may make it increasingly important to "assure that systems are aligned with goals and values, including safety, robustness and trustworthiness".{{Citation| last = National Security Commission on Artificial Intelligence| title = Final Report| date = 2021}} Subsequently, the National Institute of Standards and Technology drafted a framework for managing AI Risk, which advises that when "catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed".{{Cite journal| last = National Institute of Standards and Technology| title = AI Risk Management Framework| journal = NIST| accessdate = 2022-11-24| date = 2021-07-12| url = https://www.nist.gov/itl/ai-risk-management-framework| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124130402/https://www.nist.gov/itl/ai-risk-management-framework| url-status = live}}
In September 2021, the People's Republic of China published ethical guidelines for the use of AI in China, emphasizing that AI decisions should remain under human control and calling for accountability mechanisms. In the same month, The United Kingdom published its 10-year National AI Strategy,{{Cite web| last = Richardson| first = Tim| title = Britain publishes 10-year National Artificial Intelligence Strategy| accessdate = 2022-11-24| date = 2021| url = https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/| archive-date = 2023-02-10| archive-url = https://web.archive.org/web/20230210114137/https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/| url-status = live}} which states the British government "takes the long-term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".{{Cite web |date=2021 |title=Guidance: National AI Strategy |url=https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version |url-status=live |archive-url=https://web.archive.org/web/20230210114139/https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version |archive-date=2023-02-10 |accessdate=2022-11-24 |work=GOV.UK}} The strategy describes actions to assess long-term AI risks, including catastrophic risks. The British government held first major global summit on AI safety. This took place on the 1st and 2 November 2023 and was described as "an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach".{{Cite web |last=Hardcastle |first=Kimberley |date=2023-08-23 |title=We're talking about AI a lot right now – and it's not a moment too soon |url=http://theconversation.com/were-talking-about-ai-a-lot-right-now-and-its-not-a-moment-too-soon-211448 |access-date=2023-10-31 |website=The Conversation |language=en-US}}{{Cite web |title=Iconic Bletchley Park to host UK AI Safety Summit in early November |url=https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november |access-date=2023-10-31 |website=GOV.UK |language=en}}
Government organizations, particularly in the United States, have also encouraged the development of technical AI safety research. The Intelligence Advanced Research Projects Activity initiated the TrojAI project to identify and protect against Trojan attacks on AI systems.{{Cite web |last1=Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity |title=IARPA – TrojAI |url=https://www.iarpa.gov/research-programs/trojai |url-status=live |archive-url=https://web.archive.org/web/20221124131956/https://www.iarpa.gov/research-programs/trojai |archive-date=2022-11-24 |accessdate=2022-11-24}} The DARPA engages in research on explainable artificial intelligence and improving robustness against adversarial attacks.{{Cite web| last = Turek| first = Matt| title = Explainable Artificial Intelligence| accessdate = 2022-11-24| url = https://www.darpa.mil/program/explainable-artificial-intelligence| archive-date = 2021-02-19| archive-url = https://web.archive.org/web/20210219210013/https://www.darpa.mil/program/explainable-artificial-intelligence| url-status = live}}{{Cite web| last = Draper| first = Bruce| title = Guaranteeing AI Robustness Against Deception| work = Defense Advanced Research Projects Agency| accessdate = 2022-11-24| url = https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception| archive-date = 2023-01-09| archive-url = https://web.archive.org/web/20230109021433/https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception| url-status = live}} And the National Science Foundation supports the Center for Trustworthy Machine Learning, and is providing millions of dollars in funding for empirical AI safety research.{{Cite web | last = National Science Foundation | title = Safe Learning-Enabled Systems | date = 23 February 2023 | accessdate = 2023-02-27 | url = https://beta.nsf.gov/funding/opportunities/safe-learning-enabled-systems | archive-date = 2023-02-26 | archive-url = https://web.archive.org/web/20230226190627/https://beta.nsf.gov/funding/opportunities/safe-learning-enabled-systems | url-status = live }}
In 2024, the United Nations General Assembly adopted the first global resolution on the promotion of “safe, secure and trustworthy” AI systems that emphasized the respect, protection and promotion of human rights in the design, development, deployment and the use of AI.{{cite news|title=General Assembly adopts landmark resolution on artificial intelligence |date=21 March 2024 |url=https://news.un.org/en/story/2024/03/1147831 |website=UN News |archive-url=https://web.archive.org/web/20240420010734/https://news.un.org/en/story/2024/03/1147831 |archive-date=20 April 2024 |access-date=21 April 2024}}
In May 2024, the Department for Science, Innovation and Technology (DSIT) announced £8.5 million in funding for AI safety research under the Systemic AI Safety Fast Grants Programme, led by Christopher Summerfield and Shahar Avin at the AI Safety Institute, in partnership with UK Research and Innovation. Technology Secretary Michelle Donelan announced the plan at the AI Seoul Summit, stating the goal was to make AI safe across society and that promising proposals could receive further funding. The UK also signed an agreement with 10 other countries and the EU to form an international network of AI safety institutes to promote collaboration and share information and resources. Additionally, the UK AI Safety Institute planned to open an office in San Francisco.{{cite news|last=Say |first=Mark |title=DSIT announces funding for research on AI safety |date=23 May 2024 |url=https://www.ukauthority.com/articles/dsit-announces-funding-for-research-on-ai-safety/ |archive-url=https://web.archive.org/web/20240524232313/https://www.ukauthority.com/articles/dsit-announces-funding-for-research-on-ai-safety/ |archive-date=24 May 2024 |access-date=11 June 2024}}
= Corporate self-regulation =
AI labs and companies generally abide by safety practices and norms that fall outside of formal legislation.{{Cite journal |last1=Mäntymäki |first1=Matti |last2=Minkkinen |first2=Matti |last3=Birkstedt |first3=Teemu |last4=Viljanen |first4=Mika |date=2022 |title=Defining organizational AI governance |journal=AI and Ethics |language=en |volume=2 |issue=4 |pages=603–609 |doi=10.1007/s43681-022-00143-x |s2cid=247119668 |issn=2730-5953 |doi-access=free }} One aim of governance researchers is to shape these norms. Examples of safety recommendations found in the literature include performing third-party auditing,{{Cite journal |last1=Brundage |first1=Miles |last2=Avin |first2=Shahar |last3=Wang |first3=Jasmine |last4=Belfield |first4=Haydn |last5=Krueger |first5=Gretchen |last6=Hadfield |first6=Gillian |last7=Khlaaf |first7=Heidy |last8=Yang |first8=Jingying |last9=Toner |first9=Helen |last10=Fong |first10=Ruth |last11=Maharaj |first11=Tegan |last12=Koh |first12=Pang Wei |last13=Hooker |first13=Sara |last14=Leung |first14=Jade |last15=Trask |first15=Andrew |date=2020-04-20 |title=Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims |arxiv=2004.07213 }} offering bounties for finding failures, sharing AI incidents (an AI incident database was created for this purpose),{{Cite web| title = Welcome to the Artificial Intelligence Incident Database| accessdate = 2022-11-24| url = https://incidentdatabase.ai/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124132715/https://incidentdatabase.ai/| url-status = live}} following guidelines to determine whether to publish research or models, and improving information and cyber security in AI labs.{{Cite web| last1 = Wiblin| first1 = Robert| last2 = Harris| first2 = Keiran| title = Nova DasSarma on why information security may be critical to the safe development of AI systems| work = 80,000 Hours| accessdate = 2022-11-24| date = 2022| url = https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/| archive-date = 2022-11-24| archive-url = https://web.archive.org/web/20221124132927/https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/| url-status = live}}
Companies have also made commitments. Cohere, OpenAI, and AI21 proposed and agreed on "best practices for deploying language models", focusing on mitigating misuse.{{Cite web| last = OpenAI| title = Best Practices for Deploying Language Models| work = OpenAI| accessdate = 2022-11-24| date = 2022-06-02| url = https://openai.com/blog/best-practices-for-deploying-language-models/| archive-date = 2023-03-15| archive-url = https://web.archive.org/web/20230315184334/https://openai.com/blog/best-practices-for-deploying-language-models/| url-status = live}} To avoid contributing to racing-dynamics, OpenAI has also stated in their charter that "if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"{{Cite web| last = OpenAI| title = OpenAI Charter| work = OpenAI| accessdate = 2022-11-24| url = https://openai.com/charter/| archive-date = 2021-03-04| archive-url = https://web.archive.org/web/20210304235618/https://openai.com/charter/| url-status = live}} Also, industry leaders such as CEO of DeepMind Demis Hassabis, director of Facebook AI Yann LeCun have signed open letters such as the Asilomar Principles and the Autonomous Weapons Open Letter.{{Cite web| last = Future of Life Institute| title = Autonomous Weapons Open Letter: AI & Robotics Researchers| work = Future of Life Institute| accessdate = 2022-11-24| date = 2016| url = https://invisiosolutions.com/navigating-the-london-web-space-ai-website-builders-vs-local-web-development-companies/| archive-date = 2023-09-22| archive-url = https://web.archive.org/web/20230922183710/https://invisiosolutions.com/navigating-the-london-web-space-ai-website-builders-vs-local-web-development-companies/| url-status = dead}}
See also
References
{{Reflist}}
External links
- Unsolved Problems in ML Safety
- On the Opportunities and Risks of Foundation Models
- An Overview of Catastrophic AI Risks
- [https://cset.georgetown.edu/publication/ai-accidents-an-emerging-threat/ AI Accidents: An Emerging Threat]
- [https://mitpress.mit.edu/9780262533690/engineering-a-safer-world/ Engineering a Safer World]
{{Existential risk from artificial intelligence|state=expanded}}
Category:Artificial intelligence
Category:Existential risk from artificial general intelligence