Regulation of artificial intelligence
{{Short description|Guidelines and laws to regulate AI}}
{{Computing law}}
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms.{{cite journal |last1=Cath |first1=Corinne |title=Governing artificial intelligence: ethical, legal and technical opportunities and challenges |journal=Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences |date=2018 |volume=376 |issue=2133 |pages=20180080 |doi=10.1098/rsta.2018.0080 |pmid=30322996 |pmc=6191666 |bibcode=2018RSPTA.37680080C |doi-access=free}}{{cite arXiv |eprint=2005.11072 |first1=Olivia J. |last1=Erdélyi |first2=Judy |last2=Goldsmith |title=Regulating Artificial Intelligence: Proposal for a Global Solution |date=2020|class=cs.CY }} The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.{{cite journal |last1=Tallberg |first1=Jonas | last2=Erman |first2= Eva|last3=Furendal |first3=Markus|first4=Johannes|last4=Geith|first5=Mark|last5=Klamberg|first6=Magnus|last6=Lundgren |title=Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research |journal=International Studies Review |date=2023 |volume=25 |issue=3 |doi=10.1093/isr/viad040|doi-access=free|arxiv=2305.11528 }}
Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology.{{cite journal |last1=Héder |first1=M |title=A criticism of AI ethics guidelines |journal=Információs Társadalom |date=2020 |volume=20 |issue=4 |pages=57–73 |doi=10.22503/inftars.XX.2020.4.5|s2cid=233252939 |doi-access=free }} Regulation is deemed necessary to both foster AI innovation and manage associated risks.
Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks.{{Cite journal |last1=Curtis |first1=Caitlin |last2=Gillespie |first2=Nicole |last3=Lockey |first3=Steven |date=2022-05-24 |title=AI-deploying organizations are key to addressing 'perfect storm' of AI risks |url=https://doi.org/10.1007/s43681-022-00163-7 |journal=AI and Ethics |volume=3 |issue=1 |pages=145–153 |language=en |doi=10.1007/s43681-022-00163-7 |pmid=35634256 |issn=2730-5961 |pmc=9127285 |access-date=2022-05-30 |archive-date=2023-03-15 |archive-url=https://web.archive.org/web/20230315194711/https://link.springer.com/article/10.1007/s43681-022-00163-7 |url-status=live }}
Regulating AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.{{cite news|title=An Ethical Approach to AI is an Absolute Imperative, Andreas Kaplan|language=en|url=https://olbios.org/an-ethical-approach-to-ai-is-an-absolute-imperative/|access-date=26 April 2021|archive-date=17 December 2019|archive-url=https://web.archive.org/web/20191217072834/https://olbios.org/an-ethical-approach-to-ai-is-an-absolute-imperative/|url-status=live}}{{Cite journal |last1=Sotala|first1=Kaj|last2=Yampolskiy|first2=Roman V |date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta |volume=90|issue=1|page=018001|doi=10.1088/0031-8949/90/1/018001 |issn=0031-8949|doi-access=free|bibcode=2015PhyS...90a8001S }}
Background
According to Stanford University's 2023 AI Index, the annual number of bills mentioning "artificial intelligence" passed in 127 surveyed countries jumped from one in 2016 to 37 in 2022.{{cite web |date=2023 |title=Artificial Intelligence Index Report 2023/Chapter 6: Policy and Governance |url=https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_6-1.pdf |access-date=19 June 2023 |publisher=AI Index |archive-date=19 June 2023 |archive-url=https://web.archive.org/web/20230619013609/https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_6-1.pdf |url-status=live }}
In 2017, Elon Musk called for regulation of AI development.{{cite news|title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'|language=en|work=NPR.org|url=https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|access-date=27 November 2017|archive-date=23 April 2020|archive-url=https://web.archive.org/web/20200423135755/https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|url-status=live}} According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.{{cite news|last1=Gibbs|first1=Samuel|date=17 July 2017|title=Elon Musk: regulate AI to combat 'existential threat' before it's too late|work=The Guardian |url=https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo|access-date=27 November 2017|archive-date=6 June 2020|archive-url=https://web.archive.org/web/20200606072024/https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo|url-status=live}} Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology.{{cite news|last1=Kharpal|first1=Arjun|date=7 November 2017|title=A.I. 
is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says|work=CNBC|url=https://www.cnbc.com/2017/11/07/ai-infancy-and-too-early-to-regulate-intel-ceo-brian-krzanich-says.html|access-date=27 November 2017|archive-date=22 March 2020|archive-url=https://web.archive.org/web/20200322115325/https://www.cnbc.com/2017/11/07/ai-infancy-and-too-early-to-regulate-intel-ceo-brian-krzanich-says.html|url-status=live}} Many tech companies oppose harsh regulation of AI: "While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe".{{Cite journal |last=Chamberlain |first=Johanna |date=March 2023 |title=The Risk-Based Approach of the European Union's Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective |url=https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/riskbased-approach-of-the-european-unions-proposed-artificial-intelligence-regulation-some-comments-from-a-tort-law-perspective/A996034CC512B6B8A77B73FE39E77DAE |journal=European Journal of Risk Regulation |language=en |volume=14 |issue=1 |pages=1–13 |doi=10.1017/err.2022.38 |issn=1867-299X |access-date=2024-03-12 |archive-date=2024-03-12 |archive-url=https://web.archive.org/web/20240312211656/https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/riskbased-approach-of-the-european-unions-proposed-artificial-intelligence-regulation-some-comments-from-a-tort-law-perspective/A996034CC512B6B8A77B73FE39E77DAE |url-status=live }} Instead of trying to regulate the technology itself, some scholars have suggested developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.{{cite journal|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael|year=2019|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|journal=Business Horizons|volume=62|pages=15–25|doi=10.1016/j.bushor.2018.08.004|s2cid=158433736 }}
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".{{cite news |last1=Vincent |first1=James |title=AI is entering an era of corporate control |url=https://www.theverge.com/23667752/ai-progress-2023-report-stanford-corporate-control |access-date=19 June 2023 |work=The Verge |date=3 April 2023 |archive-date=19 June 2023 |archive-url=https://web.archive.org/web/20230619005803/https://www.theverge.com/23667752/ai-progress-2023-report-stanford-corporate-control |url-status=live }} A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.{{cite news |last1=Edwards |first1=Benj |title=Poll: AI poses risk to humanity, according to majority of Americans |url=https://arstechnica.com/information-technology/2023/05/poll-61-of-americans-say-ai-threatens-humanitys-future/ |access-date=19 June 2023 |work=Ars Technica |date=17 May 2023 |language=en-us |archive-date=19 June 2023 |archive-url=https://web.archive.org/web/20230619013608/https://arstechnica.com/information-technology/2023/05/poll-61-of-americans-say-ai-threatens-humanitys-future/ |url-status=live }} In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".{{cite news |last1=Kasperowicz |first1=Peter |title=Regulate AI? GOP much more skeptical than Dems that government can do it right: poll |url=https://www.foxnews.com/politics/regulate-ai-gop-much-more-skeptical-than-dems-that-the-government-can-do-it-right-poll |access-date=19 June 2023 |work=Fox News |date=1 May 2023 |archive-date=19 June 2023 |archive-url=https://web.archive.org/web/20230619013616/https://www.foxnews.com/politics/regulate-ai-gop-much-more-skeptical-than-dems-that-the-government-can-do-it-right-poll |url-status=live }}{{cite web |title=Fox News Poll |url=https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_April-21-24-2023_Complete_National_Topline_May-1-Release.pdf |publisher=Fox News |access-date=19 June 2023 |date=2023 |archive-date=12 May 2023 |archive-url=https://web.archive.org/web/20230512082712/https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_April-21-24-2023_Complete_National_Topline_May-1-Release.pdf |url-status=live }}
Perspectives
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI.{{Cite book |last1=Barfield |first1=Woodrow |title=Research handbook on the law of artificial intelligence |last2=Pagallo |first2=Ugo |publisher=Edward Elgar Publishing |year=2018 |isbn=978-1-78643-904-8 |location=Cheltenham, UK |oclc=1039480085}} Regulation is now generally considered necessary to both encourage AI and manage associated risks.{{Cite journal|last1=Wirtz|first1=Bernd W.|last2=Weyerer|first2=Jan C.|last3=Geyer|first3=Carolin|s2cid=158829602|date=2018-07-24|title=Artificial Intelligence and the Public Sector—Applications and Challenges|journal=International Journal of Public Administration|volume=42|issue=7|pages=596–615|doi=10.1080/01900692.2018.1498103|issn=0190-0692|url=https://zenodo.org/record/3569435|access-date=2020-08-17|archive-date=2020-08-18|archive-url=https://web.archive.org/web/20200818131415/https://zenodo.org/record/3569435|url-status=live}}{{cite journal |last1=Buiten |first1=Miriam C. |date=2019 |title=Towards Intelligent Regulation of Artificial Intelligence |journal=European Journal of Risk Regulation |volume=10 |issue=1 |pages=41–59 |doi=10.1017/err.2019.8 |doi-access=free}}{{Cite journal|last1=Mantelero|first1=Alessandro|last2=Esposito|first2=Maria Samantha|date=2021|title=An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems|journal=Computer Law & Security Review|language=en|volume=41|page=105561|doi=10.1016/j.clsr.2021.105561| issn=0267-3649|s2cid=237588123|doi-access=free|arxiv=2407.20951}} Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems,{{Cite book|title=Artificial intelligence in society.|publisher=Organisation for Economic Co-operation and Development.|date = 11 June 2019|isbn=978-92-64-54519-9|location=Paris|oclc=1105926611}} although regulation of artificial superintelligences is also considered.{{Citation|last1=Kamyshansky|first1=Vladimir P.|title=Revisiting the Place of Artificial Intelligence in Society and the State|date=2020|work=Artificial Intelligence: Anthropogenic Nature vs. Social Origin|pages=359–364|place=Cham|publisher=Springer International Publishing|isbn=978-3-030-39318-2|last2=Rudenko|first2=Evgenia Y.|last3=Kolomiets|first3=Evgeniy A.|last4=Kripakova|first4=Dina R.|series=Advances in Intelligent Systems and Computing |volume=1100 |doi=10.1007/978-3-030-39319-9_41|s2cid=213070224}} The basic approach to regulation focuses on the risks and biases of machine-learning algorithms at the level of the input data, algorithm testing, and the decision model, as well as on the explainability of outputs.
There have been both hard law and soft law proposals to regulate AI.{{cite journal |title=Special Issue on Soft Law Governance of Artificial Intelligence: IEEE Technology and Society Magazine publication information |journal=IEEE Technology and Society Magazine |date=December 2021 |volume=40 |issue=4 |pages=C2 |doi=10.1109/MTS.2021.3126194 |doi-access=free }} Some legal scholars have noted that hard law approaches to AI regulation face substantial challenges.{{cite web |last1=Marchant |first1=Gary |title="Soft Law" Governance of AI |url=https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf |website=AI Pulse |publisher=AI PULSE Papers |access-date=28 February 2023 |archive-date=21 March 2023 |archive-url=https://web.archive.org/web/20230321194721/https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf |url-status=live }}{{cite journal |last1=Johnson |first1=Walter G. |last2=Bowman |first2=Diana M. |title=A Survey of Instruments and Institutions Available for the Global Governance of Artificial Intelligence |journal=IEEE Technology and Society Magazine |date=December 2021 |volume=40 |issue=4 |pages=68–76 |doi=10.1109/MTS.2021.3123745|s2cid=245053179 }} Among these challenges, AI technology is evolving rapidly, leading to a "pacing problem" in which traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits. Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope. As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications. However, soft law approaches often lack substantial enforcement potential.{{cite journal |last1=Sutcliffe |first1=Hillary R. |last2=Brown |first2=Samantha |title=Trust and Soft Law for AI |journal=IEEE Technology and Society Magazine |date=December 2021 |volume=40 |issue=4 |pages=14–24 |doi=10.1109/MTS.2021.3123741|s2cid=244955938 }}
Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity.{{cite journal |last1=Schmit |first1=C. D. |last2=Doerr |first2=M. J. |last3=Wagner |first3=J. K. |title=Leveraging IP for AI governance |journal=Science |date=17 February 2023 |volume=379 |issue=6633 |pages=646–648 |doi=10.1126/science.add2202|pmid=36795826 |bibcode=2023Sci...379..646S |s2cid=256901479 }} They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct (e.g., soft law principles).
Prominent youth organizations focused on AI, notably Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships.{{Cite news |last=Lima-Strong |first=Cristiano |date=16 May 2024 |title=Youth activists call on world leaders to set AI safeguards by 2030 |url=https://www.washingtonpost.com/politics/2024/05/16/youth-activists-call-world-leaders-set-ai-safeguards-by-2030/ |access-date=24 June 2024 |newspaper=Washington Post}}{{Cite web |last=Haldane |first=Matt |date=21 May 2024 |title=Student AI activists at Encode Justice release 22 goals for 2030 ahead of global summit in Seoul |url=https://www.scmp.com/tech/policy/article/3263482/student-ai-activists-encode-justice-release-22-goals-2030-ahead-global-summit-seoul |access-date=24 June 2024 |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925043107/https://www.scmp.com/tech/policy/article/3263482/student-ai-activists-encode-justice-release-22-goals-2030-ahead-global-summit-seoul |url-status=live }}
AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.{{Cite report|publisher=Berkman Klein Center for Internet & Society|last1=Fjeld|first1=Jessica|last2=Achten|first2=Nele|last3=Hilligoss|first3=Hannah|last4=Nagy|first4=Adam|last5=Srikumar|first5=Madhu|date=2020-01-15|title=Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI|url=https://dash.harvard.edu/handle/1/42160420|language=en-US|access-date=2021-07-04|archive-date=2021-07-16|archive-url=https://web.archive.org/web/20210716201519/https://dash.harvard.edu/handle/1/42160420|url-status=live}} AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.{{Cite journal|last1=Wirtz|first1=Bernd W.|last2=Weyerer|first2=Jan C.|last3=Sturm|first3=Benjamin J.|s2cid=218807452|date=2020-04-15|title=The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration|journal=International Journal of Public Administration|volume=43|issue=9|pages=818–829|doi=10.1080/01900692.2020.1749851|issn=0190-0692}} The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national,{{Cite journal|last=Bredt|first=Stephan|date=2019-10-04|title=Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies|journal=Frontiers in Artificial Intelligence|volume=2|page=16|doi=10.3389/frai.2019.00016|pmid=33733105|pmc=7861258|issn=2624-8212|doi-access=free}} and international levels{{Cite book |url=https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf |title=White Paper: On Artificial Intelligence – A European approach to excellence and trust |publisher=European Commission |year=2020 |location=Brussels |page=1}} and in a variety of fields, from public service management{{Cite journal|last1=Wirtz|first1=Bernd W.|last2=Müller|first2=Wilhelm M.|s2cid=158267709|date=2018-12-03|title=An integrated artificial intelligence framework for public management|journal=Public Management Review|volume=21|issue=7|pages=1076–1100|doi=10.1080/14719037.2018.1549268|issn=1471-9037}} and accountability{{Cite book|last1=Reisman|first1=Dillon|url=https://ainowinstitute.org/aiareport2018.pdf|title=Algorithmic impact assessments: A practical framework for public agency accountability|last2=Schultz|first2=Jason|last3=Crawford|first3=Kate|last4=Whittaker|first4=Meredith|publisher=AI Now Institute|year=2018|location=New York|access-date=2020-04-28|archive-date=2020-06-14|archive-url=https://web.archive.org/web/20200614205833/https://ainowinstitute.org/aiareport2018.pdf|url-status=dead}} to law enforcement,{{Cite web |last= |date=July 2020 |title=Towards Responsible Artificial 
Intelligence Innovation |url=https://unicri.it/towards-responsible-artificial-intelligence-innovation |access-date=2022-07-18 |website=UNICRI |archive-date=2022-07-05 |archive-url=https://web.archive.org/web/20220705092745/https://unicri.it/towards-responsible-artificial-intelligence-innovation |url-status=live }} healthcare (especially the concept of a Human Guarantee),{{Cite journal|last1=Kohli|first1=Ajay|last2=Mahajan|first2=Vidur|last3=Seals|first3=Kevin|last4=Kohli|first4=Ajit|last5=Jha|first5=Saurabh|date=2019|title=Concepts in U.S. Food and Drug Administration Regulation of Artificial Intelligence for Medical Imaging|url=http://dx.doi.org/10.2214/ajr.18.20410|journal=American Journal of Roentgenology|volume=213|issue=4|pages=886–888|doi=10.2214/ajr.18.20410|pmid=31166758|s2cid=174813195|issn=0361-803X|access-date=2021-03-27|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043004/https://www.ajronline.org/doi/10.2214/AJR.18.20410|url-status=live|url-access=subscription}}{{Cite journal|last1=Hwang|first1=Thomas J.|last2=Kesselheim|first2=Aaron S.|last3=Vokinger|first3=Kerstin N.|date=2019-12-17|title=Lifecycle Regulation of Artificial Intelligence– and Machine Learning–Based Software Devices in Medicine|url=http://dx.doi.org/10.1001/jama.2019.16842|journal=JAMA|volume=322|issue=23|pages=2285–2286|doi=10.1001/jama.2019.16842|pmid=31755907|s2cid=208230202|issn=0098-7484|access-date=2021-03-27|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043009/https://jamanetwork.com/journals/jama/article-abstract/2756194|url-status=live|url-access=subscription}}{{Cite journal|last1=Sharma|first1=Kavita|last2=Manchikanti|first2=Padmavati|date=2020-10-01|title=Regulation of Artificial Intelligence in Drug Discovery and Health Care|url=http://dx.doi.org/10.1089/blr.2020.29183.ks|journal=Biotechnology Law Report|volume=39|issue=5|pages=371–380|doi=10.1089/blr.2020.29183.ks|s2cid=225540889|issn=0730-031X|access-date=2021-03-27|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043006/https://www.liebertpub.com/doi/10.1089/blr.2020.29183.ks|url-status=live|url-access=subscription}}{{Cite journal|last1=Petkus|first1=Haroldas|last2=Hoogewerf|first2=Jan|last3=Wyatt|first3=Jeremy C|date=2020|title=What do senior physicians think about AI and clinical decision support systems: Quantitative and qualitative analysis of data from specialty societies|url= |journal=Clinical Medicine|volume=20|issue=3|pages=324–328|doi=10.7861/clinmed.2019-0317|pmid=32414724|pmc=7354034|issn=1470-2118}}{{Cite journal |last1=Cheng |first1=Jerome Y. |last2=Abel |first2=Jacob T. |last3=Balis |first3=Ulysses G.J. |last4=McClintock |first4=David S. 
|last5=Pantanowitz |first5=Liron |date=2021 |title=Challenges in the Development, Deployment, and Regulation of Artificial Intelligence in Anatomic Pathology |journal=The American Journal of Pathology |volume=191 |issue=10 |pages=1684–1692 |doi=10.1016/j.ajpath.2020.10.018 |pmid=33245914 |s2cid=227191875 |issn=0002-9440|doi-access=free }} the financial sector, robotics,{{Cite journal|last1=Gurkaynak|first1=Gonenc|last2=Yilmaz|first2=Ilay|last3=Haksever|first3=Gunes|date=2016|title=Stifling artificial intelligence: Human perils|journal=Computer Law & Security Review|volume=32|issue=5|pages=749–758|doi=10.1016/j.clsr.2016.05.003|issn=0267-3649}}{{Cite journal|last1=Iphofen|first1=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|volume=16|issue=2|pages=170–184|doi=10.1080/21582041.2018.1563803|s2cid=59298502|issn=2158-2041}} autonomous vehicles, the military{{Cite book|url=https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF|title=AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense|publisher=United States Defense Innovation Board|year=2019|location=Washington, DC|oclc=1126650738|access-date=2020-03-28|archive-date=2020-01-14|archive-url=https://web.archive.org/web/20200114222649/https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF|url-status=dead}} and national security,{{Cite book|last1=Babuta|first1=Alexander|url=https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf|title=Artificial Intelligence and UK National Security: Policy Considerations|last2=Oswald|first2=Marion|last3=Janjeva|first3=Ardi|publisher=Royal United Services Institute|year=2020|location=London|access-date=2020-04-28|archive-date=2020-05-02|archive-url=https://web.archive.org/web/20200502044604/https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf|url-status=dead}} and international law.{{cite news|url=https://www.snopes.com/2017/04/21/robots-with-guns/|title=Robots with Guns: The Rise of Autonomous Weapons Systems|date=21 April 2017|work=Snopes.com|access-date=24 December 2017|archive-date=25 September 2024|archive-url=https://web.archive.org/web/20240925043120/https://www.snopes.com/news/2017/04/21/robots-with-guns/|url-status=live}}{{Cite journal|url=https://dash.harvard.edu/handle/1/33813394|title=No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law|last=Bento|first=Lucas|date=2017|website=Harvard Scholarship Depository|access-date=2019-09-14|archive-date=2020-03-23|archive-url=https://web.archive.org/web/20200323111054/https://dash.harvard.edu/handle/1/33813394|url-status=live}}
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.{{Cite news|last=Kissinger|first=Henry|author-link=Henry Kissinger|date=1 November 2021|title=The Challenge of Being Human in the Age of AI|work=The Wall Street Journal|url=https://www.wsj.com/articles/being-human-artifical-intelligence-ai-chess-antibiotic-philosophy-ethics-bill-of-rights-11635795271|access-date=4 November 2021|archive-date=4 November 2021|archive-url=https://web.archive.org/web/20211104012825/https://www.wsj.com/articles/being-human-artifical-intelligence-ai-chess-antibiotic-philosophy-ethics-bill-of-rights-11635795271|url-status=live}}
= As a response to the AI control problem =
{{Main article|AI control problem}}
Regulation of AI can be seen as a positive social means to manage the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary.{{Cite journal|last1=Barrett|first1=Anthony M.|last2=Baum|first2=Seth D.|date=2016-05-23|title=A model of pathways to artificial superintelligence catastrophe for risk and decision analysis|journal=Journal of Experimental & Theoretical Artificial Intelligence|volume=29|issue=2|pages=397–414|arxiv=1607.07730|doi=10.1080/0952813x.2016.1186228|issn=0952-813X|s2cid=928824}} Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from the university or corporation level to international levels, and on encouraging research into AI safety, together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.
Global guidance
File:Agreement with security statements - 2024 AI index.jpg (2024 AI Index chart on respondents' agreement that those developing AI systems will be responsible for associated risks, rather than those using them, and that global governance is required to address risks from generative AI.{{Cite web |date=April 2024 |title=AI Index Report 2024 - chapter 3: Responsible AI |url=https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024_Chapter3.pdf |website=aiindex.stanford.edu |access-date=2024-06-07 |archive-date=2024-05-24 |archive-url=https://web.archive.org/web/20240524171550/https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024_Chapter3.pdf |url-status=live }})
The development of a global governance board to regulate AI development was suggested at least as early as 2017.{{Cite journal|last1=Boyd|first1=Matthew|last2=Wilson|first2=Nick|date=2017-11-01|title=Rapid developments in Artificial Intelligence: how might the New Zealand government respond?|journal=Policy Quarterly|volume=13|issue=4|doi=10.26686/pq.v13i4.4619|issn=2324-1101|doi-access=free}} In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.{{cite web|url=https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/declaration-of-the-international-panel-on-artificial-intelligence.html|title=Declaration of the International Panel on Artificial Intelligence|last=Innovation|first=Science and Economic Development Canada|date=2019-05-16|website=gcnws|access-date=2020-03-29|archive-date=2020-03-29|archive-url=https://web.archive.org/web/20200329142430/https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/declaration-of-the-international-panel-on-artificial-intelligence.html|url-status=live}} In 2019, the Panel was renamed the Global Partnership on AI.{{Cite magazine|url=https://www.wired.com/story/world-plan-rein-ai-us-doesnt-like/|title=The world has a plan to rein in AI—but the US doesn't like it|date=2020-01-08|magazine=Wired|language=en-GB|access-date=2020-03-29|last1=Simonite|first1=Tom|archive-date=2020-04-18|archive-url=https://web.archive.org/web/20200418101713/https://www.wired.com/story/world-plan-rein-ai-us-doesnt-like/|url-status=live}}{{cite web|url=https://www.informationweek.com/big-data/ai-machine-learning/ai-regulation-has-the-time-arrived/a/d-id/1337099|title=AI Regulation: Has the Time Arrived?|website=InformationWeek|date=24 February 2020|language=en|access-date=2020-03-29|archive-date=2020-05-23|archive-url=https://web.archive.org/web/20200523175719/https://www.informationweek.com/big-data/ai-machine-learning/ai-regulation-has-the-time-arrived/a/d-id/1337099|url-status=live}}
The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019). The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. As of 2023, GPAI had 29 members.{{Cite web |title=Community |url=https://gpai.ai/community/ |url-status=live |archive-url=https://web.archive.org/web/20230330094049/https://www.gpai.ai/community/ |archive-date=March 30, 2023 |access-date= |website=GPAI}} The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which, responsible AI and data governance, are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence. A corresponding centre of excellence in Paris will support the other two themes, on the future of work and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic.
The OECD AI Principles{{Cite web |title=AI-Principles Overview |url=https://oecd.ai/en/ai-principles |access-date=2023-10-20 |website=OECD.AI |language=en |archive-date=2023-10-23 |archive-url=https://web.archive.org/web/20231023025653/https://oecd.ai/en/ai-principles |url-status=live }} were adopted in May 2019, and the G20 AI Principles in June 2019.{{Cite book|url=https://www.mofa.go.jp/mofaj/files/000486596.pdf|title=G20 Ministerial Statement on Trade and Digital Economy|publisher=G20|year=2019|location=Tsukuba City, Japan}}{{Cite journal|date=2019-08-21|title=International AI ethics panel must be independent|journal=Nature|language=en|volume=572|issue=7770|page=415|doi=10.1038/d41586-019-02491-x|pmid=31435065|bibcode=2019Natur.572R.415.|doi-access=free}} In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.{{Cite book|url=http://www3.weforum.org/docs/WEF_Guidelines_for_AI_Procurement.pdf|title=Guidelines for AI Procurement|publisher=World Economic Forum|year=2019|location=Cologny/Geneva|access-date=2020-04-28|archive-date=2020-07-17|archive-url=https://web.archive.org/web/20200717052819/http://www3.weforum.org/docs/WEF_Guidelines_for_AI_Procurement.pdf|url-status=live}} In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019{{Cite web |title=High-Level Event: Artificial Intelligence and Robotics – Reshaping the Future of Crime, Terrorism and Security |url=https://unicri.it/news/article/AI_Robotics_Crime_Terrorism_Security |access-date=2022-07-18 |website=UNICRI |archive-date=2022-07-18 |archive-url=https://web.archive.org/web/20220718090808/https://unicri.it/news/article/AI_Robotics_Crime_Terrorism_Security |url-status=live }} and the follow-up report Towards Responsible AI Innovation in May 2020. At the 40th session of the UNESCO General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled.{{Cite book|last1=NíFhaoláin|first1=Labhaoise|url=http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf|title=Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe|last2=Hines|first2=Andrew|last3=Nallur|first3=Vivek|publisher=Technological University Dublin, School of Computer Science, Dublin|year=2020|location=Dublin|pages=1–12|access-date=2021-03-27|archive-date=2021-01-15|archive-url=https://web.archive.org/web/20210115203018/http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf|url-status=live}}{{CC-notice|cc=by4}} UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021;{{cite book |title=UNESCO Science Report: the Race Against Time for Smarter Development. 
|date=11 June 2021 |publisher=UNESCO |location=Paris |isbn=978-92-3-100450-6 |url=https://unesdoc.unesco.org/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi |access-date=18 September 2021 |archive-date=18 June 2022 |archive-url=https://web.archive.org/web/20220618233752/https://unesdoc.unesco.org/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi |url-status=live }} this was subsequently adopted.{{Cite web |last= |date=2020-02-27 |title=Recommendation on the ethics of artificial intelligence |url=https://en.unesco.org/artificial-intelligence/ethics |access-date=2022-07-18 |website=UNESCO |language=en |archive-date=2022-07-18 |archive-url=https://web.archive.org/web/20220718090856/https://en.unesco.org/artificial-intelligence/ethics |url-status=live }} While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited.{{Cite journal |last=Nindler |first=Reinmar |date=2019-03-11 |title=The United Nation's Capability to Manage Existential Risks with a Focus on Artificial Intelligence |url=https://brill.com/view/journals/iclr/21/1/article-p5_3.xml |journal=International Community Law Review |volume=21 |issue=1 |pages=5–34 |doi=10.1163/18719732-12341388 |s2cid=150911357 |issn=1871-9740 |access-date=2022-08-30 |archive-date=2022-08-30 |archive-url=https://web.archive.org/web/20220830074017/https://brill.com/view/journals/iclr/21/1/article-p5_3.xml |url-status=live |url-access=subscription }}
An initiative of the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform that aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and to scale those solutions for global impact. It is an action-oriented, global and inclusive United Nations platform fostering the development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities.{{citation needed|date=December 2024}}
Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes; academics therefore urge nations to establish regulations for the use of AI, similar to the regulations that govern other military industries.{{Cite journal |last1=Taddeo |first1=Mariarosaria |last2=Floridi |first2=Luciano |date=April 2018 |title=Regulate artificial intelligence to avert cyber arms race |journal=Nature |language=en |volume=556 |issue=7701 |pages=296–298 |doi=10.1038/d41586-018-04602-6|pmid=29662138 |bibcode=2018Natur.556..296T |doi-access=free }}
Regional and national regulation
File:AI Strategic Documents Timeline UNICRI.jpg
The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union{{Cite book|last=Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body.|title=Regulation of artificial intelligence in selected jurisdictions.|lccn=2019668143|oclc=1110727808}} and Russia.{{Citation|last1=Popova|first1=Anna V.|title=The System of Law and Artificial Intelligence in Modern Russia: Goals and Instruments of Digital Modernization|date=2021|url=http://dx.doi.org/10.1007/978-3-030-56433-9_11|pages=89–96|place=Cham|publisher=Springer International Publishing|isbn=978-3-030-56432-2|access-date=2021-03-27|last2=Gorokhova|first2=Svetlana S.|last3=Abramova|first3=Marianna G.|last4=Balashkina|first4=Irina V.|series=Studies in Systems, Decision and Control|volume=314|doi=10.1007/978-3-030-56433-9_11|s2cid=234309883|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043111/https://link.springer.com/chapter/10.1007/978-3-030-56433-9_11|url-status=live|url-access=subscription}} Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.{{cite web|url=https://oecd-opsi.org/projects/ai/strategies/|title=OECD Observatory of Public Sector Innovation – Ai Strategies and Public Sector Components|date=21 November 2019 |access-date=2020-05-04|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043510/https://oecd-opsi.org/publications/hello-world-ai/|url-status=live}}{{Cite book|last1=Berryhill|first1=Jamie|url=https://oecd-opsi.org/wp-content/uploads/2019/11/AI-Report-Online.pdf|title=Hello, World: Artificial Intelligence and its Use in the Public Sector|last2=Heang|first2=Kévin Kok|last3=Clogher|first3=Rob|last4=McBride|first4=Keegan|publisher=OECD Observatory of Public Sector Innovation|year=2019|location=Paris|access-date=2020-05-05|archive-date=2019-12-20|archive-url=https://web.archive.org/web/20191220021331/https://oecd-opsi.org/wp-content/uploads/2019/11/AI-Report-Online.pdf|url-status=live}} These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.{{Cite book |last=Campbell |first=Thomas A. |url=http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf |title=Artificial Intelligence: An Overview of State Initiatives |publisher=FutureGrasp, LLC |year=2019 |location=Evergreen, CO |archive-url=https://web.archive.org/web/20200331140959/http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf |archive-date=March 31, 2020 |url-status=dead}}
Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."{{Cite news |last=Bradford |first=Anu |date=2023-06-27 |title=The Race to Regulate Artificial Intelligence |language=en-US |work=Foreign Affairs |url=https://www.foreignaffairs.com/united-states/race-regulate-artificial-intelligence |access-date=2023-08-11 |issn=0015-7120 |archive-date=2023-08-11 |archive-url=https://web.archive.org/web/20230811120428/https://www.foreignaffairs.com/united-states/race-regulate-artificial-intelligence |url-status=live }}
= Australia =
In October 2023, the Australian Computer Society, Business Council of Australia, Australian Chamber of Commerce and Industry, Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy.{{Cite web |title=Australia needs a national approach to AI strategy |url=https://ia.acs.org.au/article/2023/australia-needs-a-national-approach-to-ai-strategy.html |access-date=2023-11-08 |website=Information Age}} The letter backs the federal government establishing a whole-of-government AI taskforce.
In August 2024, the Australian government introduced a Voluntary AI Safety Standard, followed in September of that year by a Proposals Paper outlining potential guardrails for high-risk AI that could become mandatory. These guardrails cover areas such as model testing, transparency, human oversight, and record-keeping, all of which may be enforced through new legislation. Australia has not yet passed AI-specific laws, but existing statutes such as the Privacy Act 1988, the Corporations Act 2001, and the Online Safety Act 2021 apply to AI use.{{Cite web |date=16 December 2024 |title=AI Watch: Global regulatory tracker - Australia |url=https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia |access-date=May 8, 2025 |website=whitecase.com}}
In September 2024, a bill was also introduced that would grant the Australian Communications and Media Authority (ACMA) powers to regulate AI-generated misinformation. Several agencies, including the ACMA, the ACCC, and the Office of the Australian Information Commissioner, are expected to play roles in future AI regulation.
= Brazil =
{{Unreferenced section|date=October 2023}}
On September 30, 2021, the Brazilian Chamber of Deputies approved the Brazilian Legal Framework for Artificial Intelligence (Marco Legal da Inteligência Artificial) in a regulatory effort to govern the development and use of AI technologies and to further stimulate research and innovation in AI solutions aimed at ethics, culture, justice, fairness, and accountability. This 10-article bill outlines objectives that include contributing to the elaboration of ethical principles, promoting sustained investment in research, and removing barriers to innovation. Specifically, Article 4 emphasizes the avoidance of discriminatory AI solutions, plurality, and respect for human rights. Furthermore, the act emphasizes the importance of the equality principle in deliberate decision-making algorithms, especially for highly diverse and multiethnic societies like Brazil's.
When the bill was first released to the public, it faced substantial criticism, chiefly that it fails to thoroughly and carefully address principles of accountability, transparency, and inclusivity. Article VI establishes subjective liability, meaning that any individual harmed by an AI system who wishes to receive compensation must identify the responsible stakeholder and prove that there was an error in the machine's life cycle. Scholars emphasize that, given the high degree of autonomy, unpredictability, and complexity of AI systems, it is legally problematic to place the burden of proving algorithmic errors on the individual. Critics also pointed to ongoing problems with facial recognition systems in Brazil that have led to unjust arrests by the police, implying that, were the bill adopted, individuals would have to prove and justify such machine errors.
The main controversy of the draft bill concerned three proposed principles. First, the non-discrimination principle suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices. Second, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but with no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is necessary only when there is a high risk of violating fundamental rights. The Brazilian Legal Framework for Artificial Intelligence thus lacks binding, obligatory clauses and consists largely of relaxed guidelines. Experts emphasize that the bill may even make accountability for discriminatory AI biases harder to achieve. Compared with the EU's proposal for extensive risk-based regulation, the Brazilian bill has 10 articles proposing vague and generic recommendations.
Compared to the multistakeholder participation approach taken in the 2000s when drafting the Brazilian Internet Bill of Rights (Marco Civil da Internet), the Brazilian AI bill has been assessed as significantly lacking in stakeholder perspective. Multistakeholderism, more commonly referred to as multistakeholder governance, is the practice of bringing multiple stakeholders into dialogue, decision-making, and the implementation of responses to jointly perceived problems. In the context of AI regulation, this multistakeholder perspective captures the trade-offs and varying viewpoints of different stakeholders with specific interests, which helps maintain transparency and broader efficacy. By contrast, the legislative proposal for AI regulation did not follow a similar multistakeholder approach.
Future steps may include expanding upon the multistakeholder perspective. There has been growing concern about the inapplicability of the bill's framework, which highlights that a one-size-fits-all solution may not be suitable for the regulation of AI, and calls for subjective and adaptive provisions.
= Canada =
The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can$125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI. The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers. The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics. In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI. In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy.{{Cite web |last=Innovation |first=Science and Economic Development Canada |date=2022-06-22 |title=Government of Canada launches second phase of the Pan-Canadian Artificial Intelligence Strategy |url=https://www.canada.ca/en/innovation-science-economic-development/news/2022/06/government-of-canada-launches-second-phase-of-the-pan-canadian-artificial-intelligence-strategy.html |access-date=2023-10-24 |website=www.canada.ca |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020227/https://www.canada.ca/en/innovation-science-economic-development/news/2022/06/government-of-canada-launches-second-phase-of-the-pan-canadian-artificial-intelligence-strategy.html |url-status=live }} In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA).{{Cite web |last=Canada |first=Government of |date=2022-08-18 |title=Bill C-27 summary: Digital Charter Implementation Act, 2022 |url=https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020 |access-date=2023-10-24 |website=ised-isde.canada.ca |archive-date=2023-12-20 |archive-url=https://web.archive.org/web/20231220102246/https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020 |url-status=live }}{{Cite web |title=Government Bill (House of Commons) C-27 (44–1) – First Reading – Digital Charter Implementation Act, 2022 – Parliament of Canada |url=https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading |access-date=2022-07-12 |website=www.parl.ca |language=en-ca}}
In September 2023, the Canadian government introduced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems. The code, based initially on public consultations, seeks to provide interim guidance to Canadian companies on responsible AI practices; it is intended to serve as a stopgap until formal legislation, such as the Artificial Intelligence and Data Act (AIDA), is enacted.{{Cite web |date=2023-09-27 |title=Intelligence and Data Act |url=https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act |access-date=May 4, 2025 |website=Innovation, Science and Economic Development Canada}}{{Cite web |date=2024-12-16 |title=AI Watch: Global regulatory tracker – Canada |url=https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-canada |access-date=May 8, 2025 |website=Whitecase.com}} In November 2024, the Canadian government announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a Can$2.4 billion federal AI investment package. This includes Can$2 billion to support a new AI Sovereign Computing Strategy and the AI Computing Access Fund, which aim to bolster Canada's advanced computing infrastructure. Further funding includes Can$700 million for domestic AI development, Can$1 billion for public supercomputing infrastructure, and Can$300 million to assist companies in accessing new AI resources.
= China =
{{Further|Artificial intelligence industry in China}}
The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the ethical and legal underpinnings of AI development is accelerating, and policy ensures state control over Chinese companies and over valuable data, including the storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including for big data, cloud computing, and industrial software.{{Cite web |last=State Council China |title=New Generation of Artificial Intelligence Development Plan |url=https://www.unodc.org/ji/en/resdb/data/chn/2017/new_generation_of_artificial_intelligence_development_plan.html |archive-url=https://web.archive.org/web/20230607094245/https://www.unodc.org/ji/en/resdb/data/chn/2017/new_generation_of_artificial_intelligence_development_plan.html |archive-date=June 7, 2023 |access-date=2022-07-18 |website=www.unodc.org |language=en}}{{Cite journal |last=Department of International Cooperation Ministry of Science and Technology |date=September 2017 |title=Next Generation Artificial Intelligence Development Plan Issued by State Council |url=https://www.mfa.gov.cn/ce/cefi/eng/kxjs/P020171025789108009001.pdf |journal=China Science & Technology Newsletter |issue=17 |pages=2–12 |archive-url=https://web.archive.org/web/20220121145209/https://www.mfa.gov.cn/ce/cefi/eng/kxjs/P020171025789108009001.pdf |archive-date=January 21, 2022 |via=Ministry of Foreign Affairs of China}}{{Cite journal |last1=Wu |first1=Fei |last2=Lu |first2=Cewu |last3=Zhu |first3=Mingjie |last4=Chen |first4=Hao |last5=Zhu |first5=Jun |last6=Yu |first6=Kai |last7=Li |first7=Lei |last8=Li |first8=Ming |last9=Chen |first9=Qianfeng |last10=Li |first10=Xi |last11=Cao |first11=Xudong |date=2020 |title=Towards a new generation of artificial intelligence in China |url=https://www.nature.com/articles/s42256-020-0183-4 |journal=Nature Machine Intelligence |language=en |volume=2 |issue=6 |pages=312–316 |doi=10.1038/s42256-020-0183-4 |issn=2522-5839 |s2cid=220507829 |access-date=2022-07-18 |archive-date=2022-07-18 |archive-url=https://web.archive.org/web/20220718093205/https://www.nature.com/articles/s42256-020-0183-4 |url-status=live |url-access=subscription }} In 2021, China published ethical guidelines for the use of AI, which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.{{Cite web |title=Ethical Norms for New Generation Artificial Intelligence Released |url=https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/ |access-date=2022-07-18 |website=Center for Security and Emerging Technology |language=en-US |archive-date=2023-02-10 |archive-url=https://web.archive.org/web/20230210114220/https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/ |url-status=live }} In 2023, China introduced Interim Measures for the Management of Generative AI Services.{{Cite web |title=China just gave the world a blueprint for reigning in generative A.I. 
|url=https://fortune.com/2023/07/14/china-ai-regulations-offer-blueprint/ |access-date=2023-07-24 |website=Fortune |language=en |archive-date=2023-07-24 |archive-url=https://web.archive.org/web/20230724074819/https://fortune.com/2023/07/14/china-ai-regulations-offer-blueprint/ |url-status=live }}
On August 15, 2023, China’s first Generative AI Measures officially came into force, becoming one of the first comprehensive national regulatory frameworks for generative AI. The measures apply to all providers offering generative AI services to the Chinese public, including foreign entities, and set rules on data protection, transparency, and algorithmic accountability.{{Cite web |date=August 2024 |title=Navigating the Complexities of AI Regulation in China |url=https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china |access-date=2025-05-08 |website=Reed Smith}} In parallel, earlier regulations such as the Deep Synthesis Provisions (effective January 2023) and the Algorithm Recommendation Provisions (effective March 2022) continue to shape China's governance of AI-driven systems, including requirements for watermarking and algorithm filing with the Cyberspace Administration of China (CAC).{{Cite web |last=Sheehan |first=Matt |date=2024-02-27 |title=Tracing the Roots of China’s AI Regulations |url=https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations |access-date=2025-05-06 |website=Carnegie Endowment for International Peace}} Additionally, in October 2023, China implemented a set of Ethics Review Measures for science and technology, mandating ethical assessments of AI projects deemed socially sensitive or capable of negatively influencing public opinion. As of mid-2024, over 1,400 AI algorithms had already been registered under the CAC’s algorithm filing regime, which includes disclosure requirements and penalties for noncompliance. This layered approach reflects a broader policy process shaped not only by central directives but also by academic input, civil society concerns, and public discourse.
= Council of Europe =
The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The CoE has identified a large number of relevant documents, including guidelines, charters, papers, reports and strategies.{{cite web|title=Council of Europe and Artificial Intelligence|url=https://www.coe.int/en/web/artificial-intelligence/home|access-date=2021-07-29|website=Artificial Intelligence|language=en-GB|archive-date=2024-01-19|archive-url=https://web.archive.org/web/20240119094350/https://www.coe.int/en/web/artificial-intelligence/home|url-status=live}} The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.
In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, together with Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay, as well as the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union.{{cite web |title=The Framework Convention on Artificial Intelligence |url=https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence |access-date=2024-09-05 |website=Council of Europe |language=en-GB |archive-date=2024-09-05 |archive-url=https://web.archive.org/web/20240905132640/https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence |url-status=live }}{{Cite web |date=5 September 2024 |title=Council of Europe opens first ever global treaty on AI for signature |url=https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature |access-date=2024-09-17 |website=Council of Europe |language=en-GB |archive-date=2024-09-17 |archive-url=https://web.archive.org/web/20240917001330/https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature |url-status=live }}
= European Union =
{{main|Artificial Intelligence Act}}
The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR,{{Cite web |last1=Peukert |first1=Christian |last2=Bechtold |first2=Stefan |last3=Kretschmer |first3=Tobias |last4=Batikas |first4=Michail |date=2020-09-30 |title=Regulatory export and spillovers: How GDPR affects global markets for data |url=https://cepr.org/voxeu/columns/regulatory-export-and-spillovers-how-gdpr-affects-global-markets-data |access-date=2023-10-26 |website=CEPR |language=en |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020230/https://cepr.org/voxeu/columns/regulatory-export-and-spillovers-how-gdpr-affects-global-markets-data |url-status=live }} Digital Services Act, and the Digital Markets Act.{{Cite news |last=Coulter |first=Martin |date=2023-08-24 |title=Big Tech braces for EU Digital Services Act regulations |language=en |work=Reuters |url=https://www.reuters.com/technology/big-tech-braces-roll-out-eus-digital-services-act-2023-08-24/ |access-date=2023-10-26 |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020227/https://www.reuters.com/technology/big-tech-braces-roll-out-eus-digital-services-act-2023-08-24/ |url-status=live }}{{Cite news |date=2023-08-28 |title=Europe's new role in digital regulation |language=en |work=Le Monde.fr |url=https://www.lemonde.fr/en/opinion/article/2023/08/28/europe-s-new-role-in-digital-regulation_6112363_23.html |access-date=2023-10-26 |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020230/https://www.lemonde.fr/en/opinion/article/2023/08/28/europe-s-new-role-in-digital-regulation_6112363_23.html |url-status=live }} For AI in particular, the Artificial Intelligence Act was regarded in 2023 as the most far-reaching regulation of AI worldwide.{{Cite news |last=Satariano |first=Adam |date=2023-06-14 |title=Europeans Take a Major Step Toward Regulating A.I. |language=en-US |work=The New York Times |url=https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html |access-date=2023-10-25 |issn=0362-4331 |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020226/https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html |url-status=live }}{{Cite web |last=Browne |first=Ryan |date=2023-06-14 |title=EU lawmakers pass landmark artificial intelligence regulation |url=https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html |access-date=2023-10-25 |website=CNBC |language=en |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020227/https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html |url-status=live }}
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence,{{cite web|title=Communication Artificial Intelligence for Europe|url=https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe|last=Anonymous|date=2018-04-25|website=Shaping Europe's digital future – European Commission|language=en|access-date=2020-05-05|archive-date=2020-05-13|archive-url=https://web.archive.org/web/20200513001139/https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe|url-status=live}} supported by a High-Level Expert Group on Artificial Intelligence.{{cite web|title=High-Level Expert Group on Artificial Intelligence|url=https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence|last=smuhana|date=2018-06-14|website=Shaping Europe's digital future – European Commission|language=en|access-date=2020-05-05|archive-date=2019-10-24|archive-url=https://web.archive.org/web/20191024041927/https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence|url-status=live}}{{Cite journal|last1=Andraško|first1=Jozef|last2=Mesarčík|first2=Matúš|last3=Hamuľák|first3=Ondrej|date=2021-01-02|title=The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework|url=http://dx.doi.org/10.1007/s00146-020-01125-5|journal=AI & Society|volume=36|issue=2|pages=623–636|doi=10.1007/s00146-020-01125-5|s2cid=230109912|issn=0951-5666|access-date=2021-03-27|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925043700/https://link.springer.com/article/10.1007/s00146-020-01125-5|url-status=live|url-access=subscription}} In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI),{{cite web |date=2019 |title=Ethics guidelines for trustworthy AI |url=https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai |website=European Commission |access-date=2022-05-30 |archive-date=2023-03-29 |archive-url=https://web.archive.org/web/20230329193431/https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai |url-status=live }} following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.{{cite web |last= |date=2019-06-26 |title=Policy and investment recommendations for trustworthy Artificial Intelligence |url=https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence |access-date=2020-05-05 |website=Shaping Europe's digital future – European Commission |language=en}} The EU Commission's High-Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation, and that process is ongoing.
On February 19, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust.{{cite web |date=19 February 2020 |title=White Paper on Artificial Intelligence – a European approach to excellence and trust |url=https://digital-strategy.ec.europa.eu/en/consultations/white-paper-artificial-intelligence-european-approach-excellence-and-trust |access-date=2021-06-07 |website=European Commission |archive-date=2024-01-05 |archive-url=https://web.archive.org/web/20240105090033/https://digital-strategy.ec.europa.eu/en/consultations/white-paper-artificial-intelligence-european-approach-excellence-and-trust |url-status=live }}{{cite journal|title=What's Ahead for a Cooperative Regulatory Agenda on Artificial Intelligence?|url=https://www.csis.org/analysis/whats-ahead-cooperative-regulatory-agenda-artificial-intelligence|access-date=2021-06-07|website=www.csis.org|date=17 March 2021 |language=en|archive-date=7 June 2021|archive-url=https://web.archive.org/web/20210607144154/https://www.csis.org/analysis/whats-ahead-cooperative-regulatory-agenda-artificial-intelligence|url-status=live|last1=Broadbent |first1=Meredith }} The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern: "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.{{Cite book |last=European Commission. |title=White paper on artificial intelligence: a European approach to excellence and trust. |year=2020 |oclc=1141850140}}
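The cumulative nature of the White Paper's test can be shown with a minimal sketch (an informal illustration only, not official guidance; the sector list and inputs below are assumptions drawn from the examples named above):
<syntaxhighlight lang="python">
# Illustrative sketch of the 2020 White Paper's cumulative "high-risk" test.
# The sector set is an assumption based on the examples the White Paper cites;
# it is not an official or exhaustive enumeration.
RISKY_SECTORS = {"healthcare", "transport", "energy"}

def is_high_risk(sector: str, significant_risks_likely: bool) -> bool:
    """Both criteria must hold: a risky sector AND a risky manner of use."""
    return sector in RISKY_SECTORS and significant_risks_likely

# A healthcare application used in a risky manner falls within the scope of the
# proposed framework; used in a low-risk manner, it could instead fall under
# the voluntary labeling scheme.
print(is_high_risk("healthcare", True))   # True
print(is_high_risk("healthcare", False))  # False
</syntaxhighlight>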
A January 2021 draft was leaked online on April 14, 2021,Heikkilä, Melissa (2021-04-14). [https://www.politico.eu/newsletter/ai-decoded/politico-ai-decoded-transatlantic-schisms-finland-talks-to-machines-facebooks-fairness-project/ "POLITICO AI: Decoded: The EU's AI rules — Finland talks to machines — Facebook's fairness project"] (newsletter). POLITICO. Retrieved 2021-05-14. before the Commission presented its official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later.European Commission (2021-04-21). [https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682 Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20210514092131/https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682 |date=2021-05-14 }} (press release). Retrieved 2021-05-14. Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis.{{cite web |last=Pery |first=Andrew |date=2021-10-06 |title=Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities |url=https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |access-date=2022-02-27 |website=DeepAI |archive-date=2022-02-18 |archive-url=https://web.archive.org/web/20220218200006/https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |url-status=live }} This proposal refines the 2020 risk-based approach, this time with four risk categories: "minimal", "limited", "high" and "unacceptable".{{Cite web |last=Browne |first=Ryan |date=2023-05-15 |title=Europe takes aim at ChatGPT with what might soon be the West's first A.I. law. Here's what it means |url=https://www.cnbc.com/2023/05/15/eu-ai-act-europe-takes-aim-at-chatgpt-with-landmark-regulation.html |access-date=2023-10-25 |website=CNBC |language=en}} The proposal attracted substantial criticism in the public debate.
Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants.{{Cite journal |last1=Veale |first1=Michael |last2=Borgesius |first2=Frederik Zuiderveen |date=2021-08-01 |title=Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach |url=https://www.degruyter.com/document/doi/10.9785/cri-2021-220402/html |journal=Computer Law Review International |language=en |volume=22 |issue=4 |pages=97–112 |arxiv=2107.03721 |doi=10.9785/cri-2021-220402 |issn=2194-4164 |s2cid=235765823 |access-date=2023-01-12 |archive-date=2023-03-26 |archive-url=https://web.archive.org/web/20230326063315/https://www.degruyter.com/document/doi/10.9785/cri-2021-220402/html |url-status=live }}{{Cite journal |last=van Kolfschooten |first=Hannah |date=January 2022 |title=EU regulation of artificial intelligence: Challenges for patients' rights |url=https://dare.uva.nl/personal/pure/en/publications/eu-regulation-of-artificial-intelligence-challenges-for-patients-rights(7393eabd-82ef-4a92-9ea8-9d3c2a21eb1a).html |journal=Common Market Law Review |volume=59 |issue=1 |pages=81–112 |doi=10.54648/COLA2022005 |s2cid=248591427 |access-date=2023-12-10 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925043639/https://dare.uva.nl/search?identifier=7393eabd-82ef-4a92-9ea8-9d3c2a21eb1a |url-status=live |url-access=subscription }} The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework.{{Cite news |last=Coulter |first=Martin |date=December 7, 2023 |title=What is the EU AI Act and when will regulation come into effect? |url=https://www.reuters.com/technology/what-are-eus-landmark-ai-rules-2023-12-06/ |work=Reuters |access-date=2024-06-01 |archive-date=2023-12-10 |archive-url=https://web.archive.org/web/20231210214020/https://www.reuters.com/technology/what-are-eus-landmark-ai-rules-2023-12-06/ |url-status=live }} Unlike for other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. 
Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10<sup>25</sup> FLOP) must also undergo a thorough evaluation process.{{Cite news |last=Bertuzzi |first=Luca |date=December 7, 2023 |title=AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement |url=https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/ |work=euractiv |access-date=June 1, 2024 |archive-date=January 8, 2024 |archive-url=https://web.archive.org/web/20240108215257/https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/ |url-status=live }} A subsequent version of the AI Act was finally adopted in May 2024.{{Cite web |last=Browne |first=Ryan |date=2024-05-21 |title=World's first major law for artificial intelligence gets final EU green light |url=https://www.cnbc.com/2024/05/21/worlds-first-major-law-for-artificial-intelligence-gets-final-eu-green-light.html |access-date=2024-06-01 |website=CNBC |language=en |archive-date=2024-05-21 |archive-url=https://web.archive.org/web/20240521235907/https://www.cnbc.com/2024/05/21/worlds-first-major-law-for-artificial-intelligence-gets-final-eu-green-light.html |url-status=live }} The AI Act will be progressively enforced.{{Cite web |date=2024-03-13 |title=Artificial Intelligence Act: MEPs adopt landmark law |url=https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law |access-date=2024-06-01 |website=European Parliament |language=en |archive-date=2024-03-15 |archive-url=https://web.archive.org/web/20240315034359/https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law |url-status=live }} Recognition of emotions and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement.{{cite news |date=11 December 2023 |title=Experts react: The EU made a deal on AI rules. But can regulators move at the speed of tech? |url=https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-the-eu-made-a-deal-on-ai-rules-but-can-regulators-move-at-the-speed-of-tech/ |work=Atlantic Council}}
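For a sense of scale, training compute is commonly approximated as roughly 6 FLOP per model parameter per training token; the sketch below uses that rule of thumb and hypothetical model figures (both are assumptions for illustration, not part of the Act's text) to compare against the 10<sup>25</sup> FLOP threshold:
<syntaxhighlight lang="python">
# Rough sketch: comparing estimated training compute with the AI Act's
# 10^25 FLOP systemic-risk threshold. The 6*N*D approximation and the
# model figures below are illustrative assumptions, not part of the Act.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Common rule of thumb: about 6 FLOP per parameter per training token."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flop = estimated_training_flop(70e9, 15e12)
print(f"Estimated training compute: {flop:.1e} FLOP")  # ~6.3e+24 FLOP
print(flop >= SYSTEMIC_RISK_THRESHOLD_FLOP)            # False: below the threshold
</syntaxhighlight>
Under this rough estimate, such a model would fall just below the threshold, while a modestly larger training run would cross it; the Act itself leaves the details of measuring training compute to implementing guidance.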
The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety.{{Cite web |date=2024-11-20 |title=European approach to artificial intelligence {{!}} Shaping Europe's digital future |url=https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence |access-date=2024-12-09 |website=digital-strategy.ec.europa.eu |language=en}} It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries.{{citation needed|date=December 2024}}
Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission. The speed of these legislative initiatives is partly driven by the EU's political ambitions and could put the digital rights of European citizens at risk, including rights to privacy,{{cite web |url=https://encompass-europe.com/comment/eus-digital-ambitions-beset-with-strategic-dissonance |title= EU's digital ambitions beset with strategic dissonance |last=Natale |first= Lara|date= February 2022 |website= Encompass|publisher= |access-date= 25 February 2022|quote=}} especially given uncertain guarantees of data protection through cyber security. Among the stated guiding principles of the various legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy{{cite news |last1= Bertuzzi |first1= Luca|last2= Killeen|first2= Molly|date= 17 September 2021|title=Digital Brief powered by Google: make it or break it, Chips Act, showing the path |url=https://www.euractiv.com/section/digital/news/digital-brief-powered-by-google-make-it-or-break-it-showing-the-path/ |work=Euractiv |location= |access-date=25 February 2022}} and the concept of digital sovereignty.{{cite web|url= https://www.atlanticcouncil.org/blogs/new-atlanticist/frances-new-mantra-liberty-equality-digital-sovereignty/|title= France's new mantra: liberty, equality, digital sovereignty|last= Propp|first= Kenneth|date= 7 February 2022|website= Atlantic Council|publisher= |access-date= 25 February 2022|quote= |archive-date= 25 February 2022|archive-url= https://web.archive.org/web/20220225165739/https://www.atlanticcouncil.org/blogs/new-atlanticist/frances-new-mantra-liberty-equality-digital-sovereignty/|url-status= live}} On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed.{{cite web |url=https://www.eca.europa.eu/en/news/NEWS-SR-2024-08 |title=Artificial intelligence: EU must pick up the pace |date=29 May 2024 |access-date=29 May 2024 |website=European Court of Auditors |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925043512/https://www.eca.europa.eu/en/news/NEWS-SR-2024-08 |url-status=live }}
= Germany =
In November 2020,{{cite web |last1=Klimaschutz |first1=BMWK-Bundesministerium für Wirtschaft und |title="KI – Made in Germany" etablieren |url=https://www.bmwk.de/Redaktion/DE/Pressemitteilungen/2020/11/20201130-ki-made-in-germany-etablieren.html |website=www.bmwk.de |access-date=12 June 2023 |language=de |archive-date=12 June 2023 |archive-url=https://web.archive.org/web/20230612114711/https://www.bmwk.de/Redaktion/DE/Pressemitteilungen/2020/11/20201130-ki-made-in-germany-etablieren.html |url-status=dead }} DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany.{{cite news |title=DIN, DKE und BMWi veröffentlichen Normungsroadmap für Künstliche Intelligenz |url=https://www.all-electronics.de/markt/din-dke-und-bmwi-veroeffentlichen-normungsroadmap-fuer-kuenstliche-intelligenz.html |access-date=12 June 2023 |work=all-electronics |language=de}} The NRM KI describes requirements for future regulations and standards in the context of AI. The implementation of its recommendations for action is intended to help strengthen the German economy and science in international competition in the field of artificial intelligence and to create innovation-friendly conditions for this emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022.{{cite journal |last1=Runze |first1=Gerhard |last2=Haimerl |first2=Martin |last3=Hauer |first3=Marc |last4=Holoyad |first4=Taras |last5=Obert |first5=Otto |last6=Pöhls |first6=Henrich |last7=Tagiew |first7=Rustam |last8=Ziehn |first8=Jens |date=2023 |title=Ein Werkzeug für eine gemeinsame KI-Terminologie – Das AI-Glossary als Weg aus Babylon |url=https://webreader.javaspektrum.de/de/profiles/4967c6d5eae1-javaspektrum/editions/javaspektrum-03-2023 |journal=Java Spektrum |language=de |issue=3 |pages=42–46 |access-date=2023-06-12 |archive-date=2024-04-27 |archive-url=https://web.archive.org/web/20240427181313/https://webreader.javaspektrum.de/de/profiles/4967c6d5eae1-javaspektrum/editions/javaspektrum-03-2023 |url-status=live }} DIN coordinated more than 570 participating experts from a wide range of fields in science, industry, civil society and the public sector. The second edition is a 450-page document.
On the one hand, the NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics). On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document identifies 116 standardization needs and provides six central recommendations for action.{{cite web |title=Normungsroadmap Künstliche Intelligenz |url=https://www.dke.de/de/arbeitsfelder/core-safety/normungsroadmap-ki |website=www.dke.de |access-date=12 June 2023 |language=de}}
= G7 =
On 30 October 2023, members of the G7 subscribed to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers, in the context of the Hiroshima Process.{{Cite web |date=2023-10-30 |title=Hiroshima Process International Guiding Principles for Advanced AI system {{!}} Shaping Europe's digital future |url=https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system |access-date=2023-11-01 |website=digital-strategy.ec.europa.eu |language=en |archive-date=2023-11-01 |archive-url=https://web.archive.org/web/20231101170659/https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system |url-status=live }}
The agreement was welcomed by Ursula von der Leyen, who saw in it the principles of the EU's AI Act, which was then being finalized.
The guidelines also aim to establish a coordinated global effort towards the responsible development and use of advanced AI systems. While the guidelines are non-binding, G7 governments encourage organizations to adopt them voluntarily; they emphasize a risk-based approach across the AI lifecycle, from pre-deployment risk assessment to post-deployment incident reporting and mitigation.{{Cite web |date=January 19, 2024 |title=G7 AI Principles and Code of Conduct |url=https://www.ey.com/en_gl/insights/ai/g7-ai-principles-and-code-of-conduct |access-date=May 7, 2025 |website=Ernst & Young}}
The guiding principles and Code of Conduct also highlight the importance of AI system security, internal adversarial testing ('red teaming'), public transparency about capabilities and limitations, and governance procedures that include privacy safeguards and content authentication tools. The guidelines additionally promote AI innovation directed at solving global challenges such as climate change and public health, and call for advancing international technical standards.
Looking ahead, the G7 intends to further refine the principles and Code of Conduct in collaboration with other organizations such as the OECD and GPAI, as well as broader stakeholders. Areas for further development include more consistent AI terminology (e.g., “advanced AI systems”), the setting of risk benchmarks, and mechanisms for cross-border information sharing on potential AI risks. Despite general alignment on AI safety, analysts have noted that differing regulatory philosophies, such as the EU’s prescriptive AI Act versus the U.S.’s sector-specific approach, may challenge global regulatory harmonization.{{Cite web |last=Schildkraut |first=Peter J. |date=January 19, 2024 |title=What the G7 Code of Conduct Means for Global AI Compliance Programs |url=https://www.arnoldporter.com/en/perspectives/publications/2024/01/what-the-g7-code-of-conduct-means-for-global-ai-compliance |website=Arnold & Porter |access-date=May 8, 2025}}
= Israel =
On October 30, 2022, pursuant to government resolution 212 of August 2021, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation.{{Cite web |last=Cahane |first=Amir |date=November 13, 2022 |title=Israeli AI regulation and policy white paper: a first glance |url=https://blog.ai-laws.org/israeli-ai-regulation-and-policy-white-paper-a-first-glance/ |website=RAILS Blog}} In December 2023, the Ministry of Innovation and the Ministry of Justice published a joint AI regulation and ethics policy paper, outlining several ethical AI principles and a set of recommendations, including opting for sector-based regulation, a risk-based approach, a preference for "soft" regulatory tools, and consistency with existing global regulatory approaches to AI.{{Cite web |last=Ministry of Innovation, Science and Technology and the Ministry of Justice |date=December 12, 2023 |title=Israel's Policy on Artificial Intelligence Regulation and Ethics |url=https://www.gov.il/en/pages/ai_2023}}
This policy paper, Israel's first comprehensive national AI policy, was developed through ministerial collaboration and stakeholder consultation. It outlines ethical principles aligned with current OECD guidelines and recommends a sector-based, risk-driven regulatory framework focused on areas such as transparency and accountability.{{Cite web |date=December 17, 2023 |title=Artificial Intelligence Regulation and Ethics Policy |url=https://www.gov.il/en/pages/ai_2023 |access-date=2025-05-07 |website=gov.il}} The policy also proposes the creation of a national AI Policy Coordination Center to support regulators and to further develop the tools necessary for responsible AI deployment. In addition to domestic policy development, Israel, alongside 56 other nations, signed the world's first binding international treaty on artificial intelligence in 2024. The treaty, led by the Council of Europe, obliges signatories to ensure that AI systems uphold democratic values, human rights, and the rule of law.{{Cite web |last=Wroble |first=Sharon |date=2024-05-03 |title=Israel Signs Global Treaty to Address Risks of Artificial Intelligence |url=https://www.timesofisrael.com/israel-signs-global-treaty-to-address-risks-of-artificial-intelligence/ |access-date=2025-05-08 |website=Times of Israel}}
= Italy =
In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination.{{cite web|url=https://amp24-ilsole24ore-com.translate.goog/pagina/AFfTkfCB?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=it&_x_tr_pto=wapp|title=Cures and artificial intelligence: privacy and the risk of the algorithm that discriminates|author=Marzio Bartoloni|date=11 October 2023}}
In March 2024, the President of the Italian Data Protection Authority reaffirmed the agency's readiness to implement the European Union's newly introduced Artificial Intelligence Act, praising its framework of institutional competence and independence.{{Cite web |date=2024-12-16 |title=AI Watch: Global regulatory tracker – Italy |url=https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-italy |access-date=2025-05-09 |website=whitecase.com}} Italy has continued to develop guidance on AI applications through existing legal frameworks, including in areas such as facial recognition for law enforcement, AI in healthcare, deepfakes, and smart assistants.{{Cite web |last=Olivi, Bocchi, Cirotti |date=2024-05-07 |title=The road to the AI Act: The Italian approach – Part 3: The Italian national competent AI Authority |url=https://www.dentons.com/en/insights/articles/2024/may/7/the-road-to-the-ai-act-the-italian-approach-pt-3 |website=Dentons |access-date=2025-05-09}} The Italian government's National AI Strategy (2022–2024) emphasizes responsible innovation and outlines goals for talent development, public- and private-sector adoption, and regulatory clarity, particularly in coordination with EU-level initiatives. While Italy has not enacted standalone AI legislation, courts and regulators have begun interpreting existing laws to address transparency, non-discrimination, and human oversight in algorithmic decision-making.
= Morocco =
In Morocco, a coalition of political parties in Parliament has put forward a legislative proposal to establish a National Agency for Artificial Intelligence. The agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI.{{Cite news |last=The Moroccan Times |title=Morocco Proposes Legislation for National AI Agency |url=https://themoroccantimes.com/2024/04/27375/morocco-proposes-legislation-for-national-ai-agency |access-date=2024-04-25 |website=The Moroccan Times |date=2024-04-24 |archive-date=2024-04-25 |archive-url=https://web.archive.org/web/20240425114046/https://themoroccantimes.com/2024/04/27375/morocco-proposes-legislation-for-national-ai-agency |url-status=live }}
In recent years, Morocco has made efforts to advance its use of artificial intelligence in the legal sector, particularly through AI tools that assist with judicial prediction and document analysis, helping to streamline case law research and support legal practitioners with more complex tasks. Alongside these efforts to establish a national AI agency, AI is being gradually introduced into legislative and judicial processes in Morocco, with ongoing discussions emphasizing the benefits as well as the potential risks of these technologies.{{Cite web |last=Buza, Taha |first=Maria, Sherif |date=2025-04-09 |title=DPA Digital Digest: Morocco [2025 Edition] |url=https://digitalpolicyalert.org/digest/dpa-digital-digest-morocco |access-date=May 8, 2025 |website=Digital Policy Alert}}
Morocco's broader digital policy includes robust data governance measures, including the 2009 Personal Data Protection Law and the 2020 Cybersecurity Law, which establish requirements in areas such as privacy, breach notification, and data localization. As of 2024, additional decrees have expanded cybersecurity standards for cloud infrastructure and data audits within the country. While general data localization is not mandated, sensitive government and critical-infrastructure data must be stored domestically. Oversight is led by the National Commission for the Protection of Personal Data (CNDP) and the General Directorate of Information Systems Security (DGSSI), though public enforcement actions remain limited.
= New Zealand =
{{As of|2023|July}}, no AI-specific legislation exists, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act.{{Cite web |last=Rebecca |date=2023-07-13 |title=Why is regulating AI such a challenge? |url=https://www.pmcsa.ac.nz/2023/07/13/why-is-regulating-ai-such-a-challenge/ |access-date=2024-08-20 |website=Prime Minister's Chief Science Advisor |language=en-NZ |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925043549/https://www.pmcsa.ac.nz/2023/07/13/why-is-regulating-ai-such-a-challenge/ |url-status=live }}
In 2020, the New Zealand Government sponsored a World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI.{{Cite web |date=2020-06-29 |title=Reimagining Regulation for the Age of AI: New Zealand Pilot Project |url=https://www.weforum.org/publications/reimagining-regulation-for-the-age-of-ai-new-zealand-pilot-project/ |website=World Economic Forum}} The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI.{{Cite web |last=Cann |first=Geraden |date=2023-05-25 |title=Privacy Commission issues warning to companies and organisations using AI |url=https://www.stuff.co.nz/business/132145529/privacy-commission-issues-warning-to-companies-and-organisations-using-ai |access-date=2024-08-20 |website=Stuff |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925043513/https://www.stuff.co.nz/business/132145529/privacy-commission-issues-warning-to-companies-and-organisations-using-ai |url-status=live }} In 2023, the Privacy Commissioner released guidance on using AI in accordance with information privacy principles.{{Cite web |date=2023-09-21 |title=Artificial Intelligence and the IPPs |url=https://www.privacy.org.nz/publications/guidance-resources/ai/ |access-date=2024-08-20 |website=www.privacy.org.nz |archive-date=2024-08-20 |archive-url=https://web.archive.org/web/20240820003543/https://www.privacy.org.nz/publications/guidance-resources/ai/ |url-status=live }} In February 2024, the Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI caucus, and that framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage.{{Cite web |date=2024-02-21 |title=Survey finds most Kiwis spooked about malicious AI - minister responds |url=https://www.nzherald.co.nz/business/survey-finds-most-kiwis-worried-about-malicious-ai-technology-minister-judith-collins-responds/DJWKCXXSF5CCHPPNE47BTCQDJU/ |access-date=2024-08-20 |website=The New Zealand Herald |language=en-NZ |archive-date=2024-08-20 |archive-url=https://web.archive.org/web/20240820003543/https://www.nzherald.co.nz/business/survey-finds-most-kiwis-worried-about-malicious-ai-technology-minister-judith-collins-responds/DJWKCXXSF5CCHPPNE47BTCQDJU/ |url-status=live }}
=Philippines=
In 2023, a bill was filed in the Philippine House of Representatives which proposed the establishment of the Artificial Intelligence Development Authority (AIDA) which would oversee the development and research of artificial intelligence. AIDA was also proposed to be a watchdog against crimes using AI.{{cite news |last1=Arasa |first1=Dale |title=Philippine AI Bill Proposes Agency for Artificial Intelligence |url=https://technology.inquirer.net/122156/philippine-ai-bill-proposes-agency-for-artificial-intelligence |access-date=29 May 2024 |newspaper=Philippine Daily Inquirer |date=13 March 2023 |language=en |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925044522/https://technology.inquirer.net/122156/philippine-ai-bill-proposes-agency-for-artificial-intelligence |url-status=live }}
In 2024, the Commission on Elections also considered banning the use of AI and deepfakes in campaigning, looking to implement regulations that would apply as early as the 2025 general elections.{{cite news |last1=Abarca |first1=Charie |title=Comelec wants AI ban on campaign materials ahead of 2025 polls |url=https://newsinfo.inquirer.net/1946107/comelec-mulls-the-use-of-artificial-intelligence-for-2025-poll-campaign |access-date=29 May 2024 |newspaper=Philippine Daily Inquirer |date=29 May 2024 |language=en |archive-date=29 May 2024 |archive-url=https://web.archive.org/web/20240529064712/https://newsinfo.inquirer.net/1946107/comelec-mulls-the-use-of-artificial-intelligence-for-2025-poll-campaign |url-status=live }}
= Spain =
In 2018, the Spanish Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence.{{Cite web |last=Ministry of Science of Spain |date=2018 |title=Spanish RDI Strategy in Artificial Intelligence |url=https://knowledge4policy.ec.europa.eu/sites/default/files/Spanish_RDI_strategy_in_AI.pdf |access-date=9 December 2023 |website=www.knowledge4policy.ec.europa.eu |archive-date=18 July 2023 |archive-url=https://web.archive.org/web/20230718150236/https://knowledge4policy.ec.europa.eu/sites/default/files/Spanish_RDI_strategy_in_AI.pdf |url-status=live }}{{Excerpt|Spanish Agency for the Supervision of Artificial Intelligence|History}}
= Switzerland =
Switzerland currently has no specific AI legislation, but on 12 February 2025 the Federal Council announced plans to ratify the Council of Europe’s AI Convention and incorporate it into Swiss law. A draft bill and implementation plan are to be prepared by the end of 2026. The approach combines sector-specific regulation, limited cross-sector rules in areas such as data protection, and non-binding measures such as industry agreements. The goals are to support innovation, protect fundamental rights, and build public trust in AI.{{Cite web |date=12 February 2025 |title=Artificial Intelligence: Overview and Switzerland's regulatory approach |url=https://www.bakom.admin.ch/bakom/en/homepage/digital-switzerland-and-internet/strategie-digitale-schweiz/ai.html |access-date=31 March 2025 |website=Swiss Federal Office of Communications (OFCOM)}}
= United Kingdom =
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy.{{Cite web |title=Digital economy strategy 2015 to 2018 |url=https://www.ukri.org/publications/digital-economy-strategy-2015-to-2018 |access-date=2022-07-18 |website=www.ukri.org |date=16 February 2015 |archive-date=2022-09-01 |archive-url=https://web.archive.org/web/20220901043542/https://www.ukri.org/publications/digital-economy-strategy-2015-to-2018/ |url-status=live }} In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on the responsible design and implementation of AI systems.{{Cite web |title=Data ethics and AI guidance landscape |url=https://www.gov.uk/guidance/data-ethics-and-ai-guidance-landscape |access-date=2023-10-26 |website=GOV.UK |language=en |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026232450/https://www.gov.uk/guidance/data-ethics-and-ai-guidance-landscape |url-status=live }}{{Cite journal|journal=Zenodo|last=Leslie|first=David|s2cid=189762499|date=2019-06-11|title=Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector|url=https://zenodo.org/record/3240529|doi=10.5281/zenodo.3240529|arxiv=1906.05684|access-date=2020-04-28|archive-date=2020-04-16|archive-url=https://web.archive.org/web/20200416032404/https://zenodo.org/record/3240529|url-status=live}} In terms of cyber security, in 2020 the National Cyber Security Centre issued guidance on 'Intelligent Security Tools'.{{cite web|title=Intelligent security tools|url=https://www.ncsc.gov.uk/collection/intelligent-security-tools|access-date=2020-04-28|website=www.ncsc.gov.uk|language=en|archive-date=2020-04-06|archive-url=https://web.archive.org/web/20200406161423/https://www.ncsc.gov.uk/collection/intelligent-security-tools|url-status=live}} The following year, the UK published its 10-year National AI Strategy,{{Cite news|url=https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/|title=UK publishes National Artificial Intelligence Strategy|first=Tim|last=Richardson|website=www.theregister.com|access-date=2022-01-01|archive-date=2023-02-10|archive-url=https://web.archive.org/web/20230210114137/https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/|url-status=live}} which describes actions to assess long-term AI risks, including AGI-related catastrophic risks.[https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version The National AI Strategy of the UK] {{Webarchive|url=https://web.archive.org/web/20230210114139/https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version |date=2023-02-10 }}, 2021 (actions 9 and 10 of the section "Pillar 3 – Governing AI Effectively")
In March 2023, the UK released the white paper A pro-innovation approach to AI regulation.{{Cite web |title=A pro-innovation approach to AI regulation |url=https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper |access-date=2023-10-27 |website=GOV.UK |language=en |archive-date=2023-10-27 |archive-url=https://web.archive.org/web/20231027100934/https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper |url-status=live }} This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets.{{Cite web |last=Gikay |first=Asress Adimi |date=2023-06-08 |title=How the UK is getting AI regulation right |url=http://theconversation.com/how-the-uk-is-getting-ai-regulation-right-206701 |access-date=2023-10-27 |website=The Conversation |language=en-US |archive-date=2023-10-27 |archive-url=https://web.archive.org/web/20231027215848/http://theconversation.com/how-the-uk-is-getting-ai-regulation-right-206701 |url-status=live }} In November 2023, the UK hosted the first AI Safety Summit, with Prime Minister Rishi Sunak aiming to position the UK as a leader in AI safety regulation.{{Cite web |last=Browne |first=Ryan |date=2023-06-12 |title=British Prime Minister Rishi Sunak pitches UK as home of A.I. safety regulation as London bids to be next Silicon Valley |url=https://www.cnbc.com/2023/06/12/pm-rishi-sunak-pitches-uk-as-geographical-home-of-ai-regulation.html |access-date=2023-10-27 |website=CNBC |language=en |archive-date=2023-07-27 |archive-url=https://web.archive.org/web/20230727075628/https://www.cnbc.com/2023/06/12/pm-rishi-sunak-pitches-uk-as-geographical-home-of-ai-regulation.html |url-status=live }}{{Cite web |title=AI Safety Summit: introduction (HTML) |url=https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html |access-date=2023-10-27 |website=GOV.UK |language=en |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026221436/https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html |url-status=live }} During the summit, the UK created an AI Safety Institute, as an evolution of the Frontier AI Taskforce led by Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called frontier AI models.{{Cite web |title=Introducing the AI Safety Institute |url=https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute |access-date=2024-07-08 |website=GOV.UK |language=en |archive-date=2024-07-07 |archive-url=https://web.archive.org/web/20240707003627/https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute |url-status=live }}
The UK government has indicated its reluctance to legislate early, arguing that doing so might reduce the sector's growth and that laws could be rendered obsolete by further technological progress.{{Cite magazine |last=Henshall |first=Will |date=2024-04-01 |title=U.S., U.K. Will Partner to Safety Test AI |url=https://time.com/6962503/ai-artificial-intelligence-uk-us-safety/ |access-date=2024-07-08 |magazine=TIME |language=en |archive-date=2024-07-07 |archive-url=https://web.archive.org/web/20240707003627/https://time.com/6962503/ai-artificial-intelligence-uk-us-safety/ |url-status=live }}
= United States =
[[File:Number_of_AI-related_regulations_in_the_US_-_2024_AI_index.jpg|thumb|Number of AI-related regulations in the US, according to the 2024 AI Index]]
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.{{Cite journal|last=Weaver|first=John Frank|date=2018-12-28|title=Regulation of artificial intelligence in the United States|url=https://www.elgaronline.com/view/edcoll/9781786439048/9781786439048.00018.xml|journal=Research Handbook on the Law of Artificial Intelligence|pages=155–212|language=en-US|doi=10.4337/9781786439055.00018|isbn=9781786439055|access-date=2020-06-29|archive-date=2020-06-30|archive-url=https://web.archive.org/web/20200630101749/https://www.elgaronline.com/view/edcoll/9781786439048/9781786439048.00018.xml|url-status=live|url-access=subscription}}
== 2016–2017 ==
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence,{{Cite web |date=2016-10-12 |title=The Administration's Report on the Future of Artificial Intelligence |url=https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |access-date=2023-11-01 |website=White House |language=en |archive-date=2023-10-04 |archive-url=https://web.archive.org/web/20231004160345/https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |url-status=live }} the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions. The report states that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....".{{cite web|last=National Science and Technology Council Committee on Technology|date=October 2016|work=whitehouse.gov|title=Preparing for the Future of Artificial Intelligence|via=National Archives|url=https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence}} These risks would be the principal reason to create any new regulation, on the premise that existing regulation would not apply to AI technology.
== 2018–2019 ==
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence.{{Cite journal |date=October 2016 |title=National Strategic Research and Development Plan for Artificial Intelligence |url=https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf |journal=National Science and Technology Council}} On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."{{cite web |title=About |url=https://www.nscai.gov/about/ |access-date=2020-06-29 |website=National Security Commission on Artificial Intelligence |language=en-US}} Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.{{cite web|url=https://www.congress.gov/bill/115th-congress/house-bill/5356|title=H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018|last=Stefanik|first=Elise M.|date=2018-05-22|website=www.congress.gov|access-date=2020-03-13|archive-date=2020-03-23|archive-url=https://web.archive.org/web/20200323111045/https://www.congress.gov/bill/115th-congress/house-bill/5356|url-status=live}} The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.{{cite web|url=https://www.congress.gov/bill/116th-congress/senate-bill/1558/text|title=Text – S.1558 – 116th Congress (2019–2020): Artificial Intelligence Initiative Act|last=Heinrich|first=Martin|date=2019-05-21|website=www.congress.gov|access-date=2020-03-29|archive-date=2020-03-29|archive-url=https://web.archive.org/web/20200329200403/https://www.congress.gov/bill/116th-congress/senate-bill/1558/text|url-status=live}}{{Cite journal|last=Scherer|first=Matthew U.|date=2015|title=Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies|journal=SSRN Working Paper Series|doi=10.2139/ssrn.2609777|issn=1556-5068}}
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence,{{Cite web |title=Executive Order on Maintaining American Leadership in Artificial Intelligence – The White House |url=https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ |access-date=2023-11-01 |website=trumpwhitehouse.archives.gov |archive-date=2021-01-20 |archive-url=https://web.archive.org/web/20210120202244/https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ |url-status=live }} the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications,{{Cite web |last=Vought |first=Russell T. |title=MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES – Guidance for Regulation of Artificial Intelligence Applications |url=https://bidenwhitehouse.archives.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf |website=The White House |access-date=2020-03-28 |archive-date=2020-03-18 |archive-url=https://web.archive.org/web/20200318001101/https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf |url-status=live }} which includes ten principles for United States agencies when deciding whether and how to regulate AI.{{cite web|url=https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/|title=AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation|date=2020-01-14|website=Inside Tech Media|language=en-US|access-date=2020-03-25|archive-date=2020-03-25|archive-url=https://web.archive.org/web/20200325190748/https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/|url-status=live}} In response, the National Institute of Standards and Technology has released a position paper,{{Cite book|url=https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf|title=U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools|publisher=National Institute of Science and Technology|year=2019|access-date=2020-03-28|archive-date=2020-03-25|archive-url=https://web.archive.org/web/20200325190745/https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf|url-status=live}} and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on regulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.{{cite web|date=2020-01-13|title=Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"|url=https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|access-date=2020-11-28|website=Federal Register|archive-date=2020-11-25|archive-url=https://web.archive.org/web/20201125060218/https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|url-status=live}}
Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging. The National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan,{{Cite web |last=National Science Technology Council |date=June 21, 2019 |title=The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update |url=https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf |access-date=August 8, 2023 |archive-date=August 25, 2023 |archive-url=https://web.archive.org/web/20230825215758/https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf |url-status=live }} which received public scrutiny and recommendations for further improvement towards enabling Trustworthy AI.{{Cite journal |last1=Gursoy |first1=Furkan |last2=Kakadiaris |first2=Ioannis A. |date=2023 |title=Artificial intelligence research strategy of the United States: critical assessment and policy recommendations |journal=Frontiers in Big Data |volume=6 |doi=10.3389/fdata.2023.1206139 |issn=2624-909X |doi-access=free |pmid=37609602 |pmc=10440374 }}
== 2021–2022 ==
In March 2021, the National Security Commission on Artificial Intelligence released its final report.{{Cite book|url=https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf|title=NSCAI Final Report|publisher=The National Security Commission on Artificial Intelligence|year=2021|location=Washington, DC|access-date=2021-05-27|archive-date=2023-02-15|archive-url=https://web.archive.org/web/20230215110858/https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf|url-status=live}} In the report, the commission stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."
In June 2022, Senators Rob Portman and Gary Peters introduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk".{{cite web |author=Homeland Newswire |date=2022-06-25 |title=Portman, Peters Introduce Bipartisan Bill to Ensure Federal Government is Prepared for Catastrophic Risks to National Security |url=https://homelandnewswire.com/stories/627890045-portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security |url-status=dead |archive-url=https://web.archive.org/web/20220625220611/https://homelandnewswire.com/stories/627890045-portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security |archive-date=June 25, 2022 |accessdate=2022-07-04 |publisher=HomelandNewswire}}{{cite web |url=https://www.congress.gov/bill/117th-congress/senate-bill/4488/text |title=Text – S.4488 – 117th Congress (2021–2022): A bill to establish an interagency committee on global catastrophic risk, and for other purposes |website=Congress.gov |publisher=Library of Congress |date=2022-06-23 |accessdate=2022-07-04 |archive-date=2022-07-05 |archive-url=https://web.archive.org/web/20220705190843/https://www.congress.gov/bill/117th-congress/senate-bill/4488/text |url-status=live }} On October 4, 2022, President Joe Biden unveiled the Blueprint for an AI Bill of Rights,{{Cite web |title=Blueprint for an AI Bill of Rights {{!}} OSTP |url=https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/ |access-date=2023-11-01 |website=The White House |language=en-US |archive-date=2023-10-31 |archive-url=https://web.archive.org/web/20231031233909/https://www.whitehouse.gov/ostp/ai-bill-of-rights/ |url-status=live }} which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protection, 3. Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The blueprint had been proposed in October 2021 by the Office of Science and Technology Policy (OSTP), a US government body that advises the president on science and technology.{{Cite web |title=The White House just unveiled a new AI Bill of Rights |url=https://www.technologyreview.com/2022/10/04/1060600/white-house-ai-bill-of-rights/ |access-date=2023-10-24 |website=MIT Technology Review |language=en |archive-date=2023-10-26 |archive-url=https://web.archive.org/web/20231026020227/https://www.technologyreview.com/2022/10/04/1060600/white-house-ai-bill-of-rights/ |url-status=live }}
== 2023 ==
The New York City Bias Audit Law (Local Law 144{{Cite web |title=A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools |url=https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9 |access-date=2023-11-01 |website=The New York City Council |archive-date=2023-10-18 |archive-url=https://web.archive.org/web/20231018165639/https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9 |url-status=live }}), enacted by the NYC Council in November 2021, was originally due to come into effect on January 1, 2023. Its enforcement date was pushed back because of the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules clarifying the requirements of the legislation, and it eventually became effective on July 5, 2023.{{Cite web |last=Kestenbaum |first=Jonathan |date=July 5, 2023 |title=NYC's New AI Bias Law Broadly Impacts Hiring and Requires Audits |url=https://news.bloomberglaw.com/us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits |access-date=2023-10-24 |website=Bloomberg Law |language=en |archive-date=2023-11-01 |archive-url=https://web.archive.org/web/20231101003028/https://news.bloomberglaw.com/us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits |url-status=live }} From that date, companies operating and hiring in New York City are prohibited from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias.
File:President Joe Biden holds a bipartisan meeting on Artificial Intelligence (AI) with Senators Chuck Schumer (D-NY), Martin Heinrich (D-MN), Mike Rounds (R-SD) and Todd Young (R-IN), Tuesday, October 31, 2023, in the Oval Office.jpg
In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to manage the risks associated with AI. The companies committed to ensuring that AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation. In September 2023, eight additional companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI – subscribed to these voluntary commitments.{{Cite web |last=House |first=The White |date=2023-07-21 |title=FACT SHEET: Biden–Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI |url=https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ |access-date=2023-09-25 |website=The White House |language=en-US |archive-date=2023-09-25 |archive-url=https://web.archive.org/web/20230925180800/https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ |url-status=live }}{{Cite web |last=House |first=The White |date=2023-09-12 |title=FACT SHEET: Biden–Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI |url=https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ |access-date=2023-09-25 |website=The White House |language=en-US |archive-date=2023-09-24 |archive-url=https://web.archive.org/web/20230924052254/https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ |url-status=live }}
In October 2023, the Biden administration signaled that it would release an executive order leveraging the federal government's purchasing power to shape AI regulation, hinting at a proactive governmental stance on regulating AI technologies.{{Cite web |last=Chatterjee |first=Mohar |date=2023-10-12 |title=White House AI order to flex federal buying power |url=https://www.politico.com/news/2023/10/12/biden-government-standards-ai-00121284 |access-date=2023-10-27 |website=POLITICO |language=en}} On October 30, 2023, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a range of issues, including standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects.{{Cite web |last=House |first=The White |date=2023-10-30 |title=FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence |url=https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ |access-date=2023-12-05 |website=The White House |language=en-US |archive-date=2024-01-30 |archive-url=https://web.archive.org/web/20240130131421/https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ |url-status=live }}
The Executive Order grants various agencies and departments of the US government, including the Energy and Defense departments, the authority to apply existing consumer protection laws to AI development.{{Cite web |last1=Lewis |first1=James Andrew |last2=Benson |first2=Emily |last3=Frank |first3=Michael |date=2023-10-31 |title=The Biden Administration's Executive Order on Artificial Intelligence |url=https://www.csis.org/analysis/biden-administrations-executive-order-artificial-intelligence |language=en |access-date=2023-12-05 |archive-date=2023-12-05 |archive-url=https://web.archive.org/web/20231205192124/https://www.csis.org/analysis/biden-administrations-executive-order-artificial-intelligence |url-status=live }}
The Executive Order builds on the administration's earlier agreements with AI companies, establishing new initiatives to "red-team" or stress-test AI dual-use foundation models, especially those that have the potential to pose security risks, with data and results shared with the federal government.
The Executive Order also recognizes AI's social challenges and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, it states that AI should not "worsen job quality" and should not "cause labor-force disruptions". It also mandates that AI must "advance equity and civil rights" and must not disadvantage marginalized groups.{{Cite web |last=House |first=The White |date=2023-10-30 |title=Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence |url=https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ |access-date=2023-12-05 |website=The White House |language=en-US |archive-date=2023-12-05 |archive-url=https://web.archive.org/web/20231205162545/https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ |url-status=live }} It further calls for foundation models to include "watermarks" to help the public discern between human- and AI-generated content, a provision that has drawn controversy and criticism from deepfake detection researchers.{{Cite web |last=Lanum |first=Nikolas |date=2023-11-07 |title=President Biden's AI executive order has 'dangerous limitations,' says deepfake detection company CEO |url=https://www.foxbusiness.com/media/president-bidens-ai-executive-order-dangerous-limitations-deepfake-detection-ceo |access-date=2023-12-05 |website=FOXBusiness |language=en-US |archive-date=2023-12-05 |archive-url=https://web.archive.org/web/20231205192130/https://www.foxbusiness.com/media/president-bidens-ai-executive-order-dangerous-limitations-deepfake-detection-ceo |url-status=live }}
== 2024 ==
In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to the California legislature. The bill drew heavily on the Biden executive order.{{cite news |last=Myrow |first=Rachael |date=2024-02-16 |title=California Lawmakers Take On AI Regulation With a Host of Bills |url=https://www.kqed.org/news/11976097/california-lawmakers-take-on-ai-regulation-with-a-host-of-bills |work=KQED |access-date=2024-07-07 |archive-date=2024-07-11 |archive-url=https://web.archive.org/web/20240711051342/https://www.kqed.org/news/11976097/california-lawmakers-take-on-ai-regulation-with-a-host-of-bills |url-status=live }} It aimed to reduce catastrophic risks by mandating safety tests for the most powerful AI models. If passed, the bill would also have established a publicly funded cloud computing cluster in California.{{cite news |last1=De Vynck |first1=Gerrit |title=In Big Tech's backyard, California lawmaker unveils landmark AI bill |url=https://www.washingtonpost.com/technology/2024/02/08/california-legislation-artificial-intelligence-regulation/ |newspaper=The Washington Post |date=2024-02-08}} On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses.{{cite news |last1=Lee |first1=Wendy |date=2024-09-29 |title=Gov. Gavin Newsom vetoes AI safety bill opposed by Silicon Valley |url=https://www.latimes.com/entertainment-arts/business/story/2024-09-29/gov-gavin-newsom-vetoes-ai-safety-bill-scott-wiener-sb1047 |access-date=2024-09-29 |work=Los Angeles Times}}
On March 21, 2024, Tennessee enacted the ELVIS Act, aimed specifically at audio deepfakes and voice cloning.{{cite web |url=https://www.nytimes.com/2024/03/21/us/politics/tennessee-ai-music-law.html |title=Tennessee Adopts ELVIS Act, Protecting Artists' Voices From AI Impersonation |author=Kristin Robinson |year=2024 |work=The New York Times |access-date=March 26, 2024 |archive-date=March 25, 2024 |archive-url=https://web.archive.org/web/20240325053646/https://www.nytimes.com/2024/03/21/us/politics/tennessee-ai-music-law.html |url-status=live }} It was the first legislation in the nation aimed at regulating AI simulation of image, voice, and likeness.{{cite web |url=https://www.digitalmusicnews.com/2024/03/21/elvis-act-signed-tennessee/ |title=The ELVIS Act Has Officially Been Signed Into Law — First State-Level AI Legislation In the US |author=Ashley King |year=2024 |work=Digital Music News |access-date=March 26, 2024}} The bill passed unanimously in the Tennessee House of Representatives and Senate.{{cite web |url=https://tnga.granicus.com/player/clip/29778?view_id=703&meta_id=803640&redirect=true |title=House Floor Session – 44th Legislative Day |author=Tennessee House |year=2024 |work=Tennessee House |format=video |access-date=March 26, 2024 |archive-date=September 25, 2024 |archive-url=https://web.archive.org/web/20240925044610/https://tnga.granicus.com/player/clip/29778?view_id=703&meta_id=803640&redirect=true |url-status=live }} Supporters hoped the law's success would inspire similar action in other states, contributing to a unified approach to copyright and privacy in the digital age and reinforcing the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses.{{cite web |url=https://www.tennessean.com/story/entertainment/music/2024/03/21/elvis-act-tennessee-gov-lee-signs-act-musicians-ai/73019388007/ |title=TN Gov. Lee signs ELVIS Act into law in honky-tonk, protects musicians from AI abuses |author=Audrey Gibbs |year=2024 |work=The Tennessean |access-date=March 26, 2024 |archive-date=September 25, 2024 |archive-url=https://web.archive.org/web/20240925043946/https://www.tennessean.com/story/entertainment/music/2024/03/21/elvis-act-tennessee-gov-lee-signs-act-musicians-ai/73019388007/ |url-status=live }}{{cite web |url=https://www.memphisflyer.com/the-elvis-act |title=The ELVIS Act |author=Alex Greene |year=2024 |work=Memphis Flyer |access-date=March 26, 2024 |archive-date=September 25, 2024 |archive-url=https://web.archive.org/web/20240925044528/https://www.memphisflyer.com/the-elvis-act |url-status=live }}
On March 13, 2024, Utah Governor Spencer Cox signed S.B. 149, the "Artificial Intelligence Policy Act", which took effect on May 1, 2024. It establishes liability, notably for companies that do not disclose their use of generative AI when required by state consumer protection laws, or when users commit criminal offenses using generative AI. It also creates the Office of Artificial Intelligence Policy and the Artificial Intelligence Learning Laboratory Program.{{Cite news |last=Wodecki |first=Ben |date=March 27, 2024 |title=Utah Passes Legislation Regulating Use of Artificial Intelligence |url=https://aibusiness.com/responsible-ai/utah-passes-legislation-regulating-use-of-artificial-intelligence |work=AI Business |access-date=April 25, 2024 |archive-date=April 25, 2024 |archive-url=https://web.archive.org/web/20240425225212/https://aibusiness.com/responsible-ai/utah-passes-legislation-regulating-use-of-artificial-intelligence |url-status=live }}{{Cite web |title=SB0149 |url=https://le.utah.gov/~2024/bills/static/SB0149.html |access-date=2024-04-25 |website=le.utah.gov |archive-date=2024-04-29 |archive-url=https://web.archive.org/web/20240429183900/https://le.utah.gov/~2024/bills/static/SB0149.html |url-status=live }}
== 2025 ==
In January 2025, President Trump repealed the Biden executive order, reflecting his administration's preference for deregulating AI to support innovation rather than emphasizing safeguards against its risks.{{Cite news |last=Swain |first=Gyana |date=January 21, 2025 |title=Trump repeals Biden’s AI oversight order, shifts focus to innovation-driven policies |url=https://www.cio.com/article/3806594/trump-repeals-bidens-ai-oversight-order-shifts-focus-to-innovation-driven-policies.html}}
In early 2025, Congress began advancing bipartisan legislation targeting AI-generated deepfakes, including the "TAKE IT DOWN Act", which would prohibit the nonconsensual disclosure of AI-generated "intimate imagery" and require platforms to remove such content. Lawmakers also reintroduced the CREATE AI Act to codify the National AI Research Resource (NAIRR), which aimed to expand public access to computing resources, datasets, and AI testing environments. The Trump administration also signed Executive Order 14179 to initiate a national "AI Action Plan" focused on securing U.S. global AI dominance, with the White House seeking public input on AI safety and standards. At the state level, new laws were passed or proposed to regulate AI-generated impersonations, chatbot disclosures, and synthetic political content. Meanwhile, the Department of Commerce expanded export controls on AI technology, and NIST published updated guidance on AI cybersecurity risks.{{Cite web |last=Johnson, Xenakis, Nonaka, Ponder, Mizerak, Gweon, Gonzalez Valenzuela, Kane, Larson, Wells |first=Jennifer, Nicholas, Mike, Jayne, John, August, Jess, Conor, Max, McCall |date=2025-04-23 |title=U.S. Tech Legislative & Regulatory Update – First Quarter 2025 |url=https://www.insideglobaltech.com/2025/04/23/u-s-tech-legislative-regulatory-update-first-quarter-2025/ |access-date=2025-05-11 |website=Inside Global Tech}}
In March 2025, OpenAI made a policy proposal for the Trump administration to preempt pending AI-related state laws with federal laws.{{cite news |date=March 31, 2025 |work=Bloomberg Law |url=https://news.bloomberglaw.com/us-law-week/openais-preemption-request-highlights-state-laws-downsides |access-date=May 21, 2025 |last=Roberts |first=Oliver |title=OpenAI’s Preemption Request Highlights State Laws’ Downsides}} Meta, Google, IBM and Andreessen Horowitz have also pressured the government to adopt national rules that would rein in state laws.{{Cite web |date=2025-05-12 |title=How Big Tech is trying to shut down California’s AI rules |url=https://www.politico.com/news/2025/05/12/how-big-tech-is-pitting-washington-against-california-00336484 |access-date=2025-05-22 |website=Politico |language=en}} In May 2025, House Republicans inserted into a tax bill a clause banning state AI laws for 10 years,{{Cite web |date=2025-05-16 |title=House Republicans include a 10-year ban on US states regulating AI in 'big, beautiful' bill |url=https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a |access-date=2025-05-22 |website=AP News |language=en}} which was met with opposition from more than 100 organizations.{{Cite web |last=Duffy |first=Clare |date=2025-05-19 |title=House Republicans want to stop states from regulating AI. More than 100 organizations are pushing back |url=https://edition.cnn.com/2025/05/19/tech/house-spending-bill-ai-provision-organizations-raise-alarm |access-date=2025-05-22 |website=CNN |language=en}}
Regulation of fully autonomous weapons
{{main|Lethal autonomous weapon}}
Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.{{cite web|url=https://unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument|title=Background on Lethal Autonomous Weapons Systems in the CCW|publisher=United Nations Geneva|access-date=2020-05-05|archive-date=2020-04-27|archive-url=https://web.archive.org/web/20200427230529/https://unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument|url-status=live}} Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS, affirmed by the GGE, was adopted in 2018.{{cite web|url=https://www.unog.ch/80256EDD006B8954/(httpAssets)/815F8EE33B64DADDC12584B7004CF3A4/$file/CCW+MSP+2019+CRP.2+Rev+1.pdf|title=Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System|publisher=United Nations Geneva|access-date=2020-05-05|archive-date=2020-12-01|archive-url=https://web.archive.org/web/20201201120810/https://unog.ch/80256EDD006B8954/(httpAssets)/815F8EE33B64DADDC12584B7004CF3A4/$file/CCW+MSP+2019+CRP.2+Rev+1.pdf|url-status=dead}}
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.{{Cite journal|last=Baum|first=Seth|date=2018-09-30|title=Countering Superintelligence Misinformation|journal=Information|volume=9|issue=10|page=244|doi=10.3390/info9100244|issn=2078-2489|doi-access=free}} The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots – a coalition of non-governmental organizations.{{cite web|url=https://www.stopkillerrobots.org/wp-content/uploads/2019/10/KRC_CountryViews_25Oct2019rev.pdf|title=Country Views on Killer Robots|publisher=The Campaign to Stop Killer Robots|access-date=2020-05-05|archive-date=2019-12-22|archive-url=https://web.archive.org/web/20191222104138/https://www.stopkillerrobots.org/wp-content/uploads/2019/10/KRC_CountryViews_25Oct2019rev.pdf|url-status=live}} The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS.{{Cite book |last=Sayler |first=Kelley |url=https://fas.org/sgp/crs/natsec/R45178.pdf |title=Artificial Intelligence and National Security: Updated November 10, 2020 |publisher=Congressional Research Service |year=2020 |location=Washington, DC |access-date=May 27, 2021 |archive-date=May 8, 2020 |archive-url=https://web.archive.org/web/20200508062631/https://fas.org/sgp/crs/natsec/R45178.pdf |url-status=live }} The Congressional Research Service indicated in 2023 that the US does not have LAWS in its inventory, but that its policy does not prohibit their development and employment.{{Cite journal |date=May 15, 2023 |title=Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems |url=https://crsreports.congress.gov/product/pdf/IF/IF11150 |journal=Congressional Research Service |access-date=October 18, 2023 |archive-date=November 1, 2023 |archive-url=https://web.archive.org/web/20231101003029/https://crsreports.congress.gov/product/pdf/IF/IF11150 |url-status=live }}
See also
- AI alignment
- Algorithmic accountability
- Algorithmic bias
- Artificial intelligence
- Artificial intelligence and elections
- Artificial intelligence arms race
- Artificial intelligence in government
- Ethics of artificial intelligence
- Government by algorithm
- Legal informatics
- Regulation of algorithms
- {{section link|Self-driving car liability|Artificial intelligence and liability}}