Regulation of algorithms
{{Short description|Government regulation}}
{{distinguish|Government by algorithm}}
{{Computing law}}
'''Regulation of algorithms''', or '''algorithmic regulation''', is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning.{{cite news |title=Algorithms have gotten out of control. It's time to regulate them. |url=https://theweek.com/articles/832948/algorithms-have-gotten-control-time-regulate |accessdate=22 March 2020 |work=theweek.com |date=3 April 2019 |language=en |archive-date=22 March 2020 |archive-url=https://web.archive.org/web/20200322114948/https://theweek.com/articles/832948/algorithms-have-gotten-control-time-regulate |url-status=live }}{{cite web |last1=Martini |first1=Mario |title=FUNDAMENTALS OF A REGULATORY SYSTEM FOR ALGORITHM-BASED PROCESSES |url=https://www.vzbv.de/sites/default/files/downloads/2019/07/19/martini_regulatory_system_algorithm_based_processes.pdf |accessdate=22 March 2020}}{{cite news |title=Rise and Regulation of Algorithms |url=https://berkeleyglobalsociety.com/en/perspectives/rise-and-regulation-of-algorithms/ |accessdate=22 March 2020 |work=Berkeley Global Society |archive-date=22 March 2020 |archive-url=https://web.archive.org/web/20200322114935/https://berkeleyglobalsociety.com/en/perspectives/rise-and-regulation-of-algorithms/ |url-status=live }} For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union.{{Cite book|last=Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body.|title=Regulation of artificial intelligence in selected jurisdictions.|oclc=1110727808}} Regulation of AI is considered necessary both to encourage AI and to manage the associated risks, but it remains challenging.{{Cite journal|last1=Wirtz|first1=Bernd W.|last2=Weyerer|first2=Jan C.|last3=Geyer|first3=Carolin|date=2018-07-24|title=Artificial Intelligence and the Public Sector—Applications and Challenges|journal=International Journal of Public Administration|volume=42|issue=7|pages=596–615|doi=10.1080/01900692.2018.1498103|s2cid=158829602|issn=0190-0692|url=https://zenodo.org/record/3569435|access-date=2024-09-25|archive-date=2020-08-18|archive-url=https://web.archive.org/web/20200818131415/https://zenodo.org/record/3569435|url-status=live}} Another emerging topic is the regulation of blockchain algorithms, in particular the use of smart contracts, which is discussed alongside the regulation of AI algorithms.{{cite book |last1=Fitsilis |first1=Fotios |title=Imposing Regulation on Advanced Algorithms |date=2019 |publisher=Springer International Publishing |isbn=978-3-030-27978-3 |url=https://www.springer.com/gp/book/9783030279783 |language=en}} Many countries have enacted regulations on high-frequency trading, an area that technological progress is shifting into the realm of AI algorithms.{{cn|date=June 2024}}
A central motivation for regulating algorithms is the apprehension of losing control over them as their impact on human life grows. Several countries have already introduced regulations for automated credit scoring, under which a right to explanation is mandatory.Consumer Financial Protection Bureau, [https://www.consumerfinance.gov/eregulations/1002-9/2011-31714#1002-9-b-2 §1002.9(b)(2)]{{Cite journal|last1=Edwards|first1=Lilian|last2=Veale|first2=Michael|date=2018|title=Enslaving the Algorithm: From a 'Right to an Explanation' to a 'Right to Better Decisions'?|journal=IEEE Security & Privacy|volume=16|issue=3|pages=46–54|doi=10.1109/MSP.2018.2701152|arxiv=1803.07540 |ssrn=3052831|s2cid=4049746|url=https://strathprints.strath.ac.uk/63317/1/Edwards_Veale_SPM_2018_Enslaving_the_algorithm_from_a_right_to_an_explanation_to_a_right_to_better_decisions.pdf|access-date=2020-08-14|archive-date=2020-10-21|archive-url=https://web.archive.org/web/20201021002428/https://strathprints.strath.ac.uk/63317/1/Edwards_Veale_SPM_2018_Enslaving_the_algorithm_from_a_right_to_an_explanation_to_a_right_to_better_decisions.pdf|url-status=live}} The IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users.{{Cite journal |last1=Treleaven |first1=Philip |last2=Barnett |first2=Jeremy |last3=Koshiyama |first3=Adriano |date=February 2019 |title=Algorithms: Law and Regulation |url=https://ieeexplore.ieee.org/document/8672418 |journal=Computer |volume=52 |issue=2 |pages=32–40 |doi=10.1109/MC.2018.2888774 |s2cid=85500054 |issn=0018-9162 |access-date=2024-09-25 |archive-date=2024-08-17 |archive-url=https://web.archive.org/web/20240817152951/https://ieeexplore.ieee.org/document/8672418 |url-status=live |url-access=subscription }} Concerns about bias, transparency, and ethics have emerged with respect to the use of algorithms in diverse domains ranging from criminal justice{{Cite web|title=AI is sending people to jail—and getting it wrong|url=https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/|access-date=2021-01-24|website=MIT Technology Review|date=January 21, 2019|first=Karen|last=Hao|language=en|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925230723/https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/|url-status=live}} to healthcare{{Cite journal|last=Ledford|first=Heidi|date=2019-10-24|title=Millions of black people affected by racial bias in health-care algorithms|url=https://www.nature.com/articles/d41586-019-03228-6|journal=Nature|language=en|volume=574|issue=7780|pages=608–609|doi=10.1038/d41586-019-03228-6|pmid=31664201|bibcode=2019Natur.574..608L|s2cid=204943000|access-date=2024-09-25|archive-date=2024-09-23|archive-url=https://web.archive.org/web/20240923145434/https://www.nature.com/articles/d41586-019-03228-6|url-status=live|url-access=subscription}}; many fear that artificial intelligence could replicate existing social inequalities along lines of race, class, gender, and sexuality.
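As an illustration of what such an explanation might look like in practice, the following sketch returns a credit decision together with the inputs that most reduced the applicant's score. The linear model, feature names, weights, and threshold are hypothetical and are not drawn from any cited regulation.
<syntaxhighlight lang="python">
# Illustrative sketch only: one way an automated credit-scoring system could
# satisfy a "right to explanation" by reporting which inputs pushed the
# decision downward. Model, features, and weights are hypothetical.

FEATURE_WEIGHTS = {            # hypothetical scoring weights (positive = helps)
    "payment_history": 0.40,
    "credit_utilization": -0.35,
    "account_age": 0.15,
    "recent_inquiries": -0.10,
}
APPROVAL_THRESHOLD = 0.5       # hypothetical cut-off for approval

def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return the decision plus the two features that most reduced the score."""
    contributions = {name: weight * applicant[name]
                     for name, weight in FEATURE_WEIGHTS.items()}
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    worst = sorted(contributions, key=contributions.get)[:2]
    return approved, [f"{name} lowered the score" for name in worst]

approved, reasons = score_and_explain({
    "payment_history": 0.9,    # inputs assumed normalized to [0, 1]
    "credit_utilization": 0.8,
    "account_age": 0.2,
    "recent_inquiries": 0.6,
})
print("approved:", approved)          # False for this applicant
print("principal reasons:", reasons)  # the explanation owed to the applicant
</syntaxhighlight>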
== Regulation of artificial intelligence ==
{{Main|Regulation of artificial intelligence}}
=== Public discussion ===
In 2016, Joy Buolamwini founded the Algorithmic Justice League after a personal experience with biased facial detection software, in order to raise awareness of the social implications of artificial intelligence through art and research.{{cite news |last1=Lufkin |first1=Bryan |title=Algorithmic justice |url=https://www.bbc.com/worklife/article/20190718-algorithmic-justice |access-date=31 December 2020 |work=BBC Worklife |date=22 July 2019 |language=en |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925230817/https://www.bbc.com/worklife/article/20190718-algorithmic-justice |url-status=live }}
In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence.{{cite news|url=https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'|work=NPR|date=July 17, 2017|first=Camila|last=Domonoske|accessdate=27 November 2017|language=en|archive-date=17 August 2017|archive-url=https://web.archive.org/web/20170817233809/https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|url-status=live}}{{cite news|last1=Gibbs|first1=Samuel|title=Elon Musk: regulate AI to combat 'existential threat' before it's too late|url=https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo|accessdate=27 November 2017|work=The Guardian|date=17 July 2017|archive-date=6 June 2020|archive-url=https://web.archive.org/web/20200606072024/https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo|url-status=live}}{{cite news|last1=Kharpal|first1=Arjun|title=A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says|url=https://www.cnbc.com/2017/11/07/ai-infancy-and-too-early-to-regulate-intel-ceo-brian-krzanich-says.html|accessdate=27 November 2017|work=CNBC|date=7 November 2017|archive-date=22 March 2020|archive-url=https://web.archive.org/web/20200322115325/https://www.cnbc.com/2017/11/07/ai-infancy-and-too-early-to-regulate-intel-ceo-brian-krzanich-says.html|url-status=live}} According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.{{cite journal|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael|year=2019|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|journal=Business Horizons|volume=62|pages=15–25|doi=10.1016/j.bushor.2018.08.004|s2cid=158433736 }} One suggestion has been the development of a global governance board to regulate AI development.{{Cite journal|last1=Boyd|first1=Matthew|last2=Wilson|first2=Nick|date=2017-11-01|title=Rapid developments in Artificial Intelligence: how might the New Zealand government respond?|journal=Policy Quarterly|volume=13|issue=4|doi=10.26686/pq.v13i4.4619|issn=2324-1101|doi-access=free}} In 2020, the European Union published its draft strategy paper for promoting and regulating AI.
Algorithmic tacit collusion is a legally dubious anticompetitive practice carried out by means of algorithms, which courts are not able to prosecute.{{cite journal |last1=Ezrachi |first1=A. |last2=Stucke |first2=M. E. |title=Sustainable and unchallenged algorithmic tacit collusion |journal=Northwestern Journal of Technology & Intellectual Property |date=13 March 2020 |volume=17 |issue=2 |language=en |issn=1549-8271}} This danger concerns scientists and regulators in the EU, the US, and beyond. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows:{{cite web |last1=Vestager |first1=Margrethe |title=Algorithms and competition |url=https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/bundeskartellamt-18th-conference-competition-berlin-16-march-2017_en |archive-url=https://web.archive.org/web/20170630162633/https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/bundeskartellamt-18th-conference-competition-berlin-16-march-2017_en |url-status=dead |archive-date=2017-06-30 |publisher=European Commission |access-date=1 May 2021 |format=Bundeskartellamt 18th Conference on Competition |date=2017}}
"A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy."
{{anchor|SyRI}}In 2018, the Netherlands deployed SyRI (Systeem Risico Indicatie), an algorithmic system to detect citizens perceived to be at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators.{{cite magazine |title=Europe Limits Government by Algorithm. The US, Not So Much |url=https://www.wired.com/story/europe-limits-government-algorithm-us-not-much/ |magazine=Wired |accessdate=11 April 2020 |language=en |last1=Simonite |first1=Tom |date=February 7, 2020 |archive-date=11 April 2020 |archive-url=https://web.archive.org/web/20200411072135/https://www.wired.com/story/europe-limits-government-algorithm-us-not-much/ |url-status=live }} This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).Rechtbank Den Haag 5 February 2020, C-09-550982-HA ZA 18-388 (English), {{ECLI|ECLI:NL:RBDHA:2020:1878}}
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm."{{cite magazine |title=Skewed Grading Algorithms Fuel Backlash Beyond the Classroom |url=https://www.wired.com/story/skewed-grading-algorithms-fuel-backlash-beyond-classroom/ |accessdate=26 September 2020 |magazine=Wired |language=en-us |archive-date=20 September 2020 |archive-url=https://web.archive.org/web/20200920152338/https://www.wired.com/story/skewed-grading-algorithms-fuel-backlash-beyond-classroom/ |url-status=live }} The protest was successful and the grades were withdrawn.{{cite news |last1=Reuter |first1=Markus |title=Fuck the Algorithm - Jugendproteste in Großbritannien gegen maschinelle Notenvergabe erfolgreich |url=https://netzpolitik.org/2020/fuck-the-algorithm-jugendproteste-in-grossbritannien-gegen-maschinelle-notenvergabe-erfolgreich/ |accessdate=3 October 2020 |work=netzpolitik.org |date=17 August 2020 |language=de-DE |archive-date=19 September 2020 |archive-url=https://web.archive.org/web/20200919033038/https://netzpolitik.org/2020/fuck-the-algorithm-jugendproteste-in-grossbritannien-gegen-maschinelle-notenvergabe-erfolgreich/ |url-status=live }}
=== Implementation ===
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national,{{Cite journal|last=Bredt|first=Stephan|date=2019-10-04|title=Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies|journal=Frontiers in Artificial Intelligence|volume=2|page=16 |doi=10.3389/frai.2019.00016|pmid=33733105 |pmc=7861258 |issn=2624-8212|doi-access=free}} and international levels{{Cite book|url=https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf|title=White Paper: On Artificial Intelligence – A European approach to excellence and trust|publisher=European Commission|year=2020|location=Brussels|pages=1|access-date=2020-03-27|archive-date=2020-02-20|archive-url=https://web.archive.org/web/20200220173419/https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf|url-status=live}} and in fields from public service management{{Cite journal|last1=Wirtz|first1=Bernd W.|last2=Müller|first2=Wilhelm M.|date=2018-12-03|title=An integrated artificial intelligence framework for public management|journal=Public Management Review|volume=21|issue=7|pages=1076–1100|doi=10.1080/14719037.2018.1549268|s2cid=158267709|issn=1471-9037}} to law enforcement, the financial sector, robotics,{{Cite journal|last1=Iphofen|first1=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|volume=16 |issue=2 |pages=170–184|doi=10.1080/21582041.2018.1563803|s2cid=59298502 |issn=2158-2041}} the military,{{Cite book|last=United States. 
Defense Innovation Board.|title=AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense|oclc=1126650738}} and international law.{{cite news|url=https://www.snopes.com/2017/04/21/robots-with-guns/|title=Robots with Guns: The Rise of Autonomous Weapons Systems|date=21 April 2017|work=Snopes.com|accessdate=24 December 2017|archive-date=25 September 2024|archive-url=https://web.archive.org/web/20240925043120/https://www.snopes.com/news/2017/04/21/robots-with-guns/|url-status=live}}{{Cite journal|url=https://dash.harvard.edu/handle/1/33813394|title=No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law|last=Bento|first=Lucas|date=2017|website=Harvard Scholarship Depository|accessdate=2019-09-14|archive-date=2020-03-23|archive-url=https://web.archive.org/web/20200323111054/https://dash.harvard.edu/handle/1/33813394|url-status=live}} Many observers are concerned that there is not enough visibility into and monitoring of AI in these sectors.{{cite web |last1=MacCarthy |first1=Mark |title=AI Needs More Regulation, Not Less |url=https://www.brookings.edu/research/ai-needs-more-regulation-not-less/ |website=Brookings |date=9 March 2020 |access-date=25 September 2024 |archive-date=24 April 2023 |archive-url=https://web.archive.org/web/20230424210332/https://www.brookings.edu/research/ai-needs-more-regulation-not-less/ |url-status=live }} In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data.{{cite journal |last1=Van Loo |first1=Rory |title=Technology Regulation by Default: Platforms, Privacy, and the CFPB |journal=Georgetown Law Technology Review |date=July 2018 |volume=2 |issue=1 |pages=542–543 |url=https://scholarship.law.bu.edu/faculty_scholarship/355 |access-date=2024-09-25 |archive-date=2021-01-17 |archive-url=https://web.archive.org/web/20210117134649/https://scholarship.law.bu.edu/faculty_scholarship/355/ |url-status=live }}
In the United States, in January 2020, following the 2019 Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.{{Cite web|url=https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/|title=AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation|date=2020-01-14|website=Inside Tech Media|language=en-US|access-date=2020-03-25|archive-date=2020-03-25|archive-url=https://web.archive.org/web/20200325190748/https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/|url-status=live}}{{Cite book|url=https://bidenwhitehouse.archives.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf|title=Memorandum for the Heads of Executive Departments and Agencies|publisher=White House Office of Science and Technology Policy|year=2020|location=Washington, D.C.|access-date=2020-03-27|archive-date=2020-03-18|archive-url=https://web.archive.org/web/20200318001101/https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf|url-status=live}} In response, the National Institute of Standards and Technology has released a position paper,{{Cite book|url=https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf|title=U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools|publisher=National Institute of Science and Technology|year=2019|access-date=2020-03-27|archive-date=2020-03-25|archive-url=https://web.archive.org/web/20200325190745/https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf|url-status=live}} the National Security Commission on Artificial Intelligence has published an interim report,{{Cite book|url=https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view|title=NSCAI Interim Report for Congress|newspaper=Google Docs |publisher=The National Security Commission on Artificial Intelligence|year=2019|access-date=2020-03-27|archive-date=2021-09-10|archive-url=https://web.archive.org/web/20210910165838/https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view|url-status=live}} and the Defense Innovation Board has issued recommendations on the ethical use of AI.{{Cite book|url=https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF|title=AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense|publisher=Defense Innovation Board|year=2020|location=Washington, DC|access-date=2020-03-27|archive-date=2020-01-14|archive-url=https://web.archive.org/web/20200114222649/https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF|url-status=dead}}
In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information, the General Data Protection Regulation (GDPR). The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design.{{Cite journal |last1=Goodman |first1=Bryce |last2=Flaxman |first2=Seth |date=2017-10-02 |title=European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation" |url=https://ojs.aaai.org/index.php/aimagazine/article/view/2741 |journal=AI Magazine |language=en |volume=38 |issue=3 |pages=50–57 |doi=10.1609/aimag.v38i3.2741 |arxiv=1606.08813 |s2cid=7373959 |issn=2371-9621 |access-date=2024-09-25 |archive-date=2022-12-24 |archive-url=https://web.archive.org/web/20221224115811/https://ojs.aaai.org/index.php/aimagazine/article/view/2741 |url-status=live }}
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.{{Cite journal|last=Baum|first=Seth|date=2018-09-30|title=Countering Superintelligence Misinformation|journal=Information|volume=9|issue=10|pages=244|doi=10.3390/info9100244|issn=2078-2489|doi-access=free}} In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.{{Cite web|url=https://www.congress.gov/bill/115th-congress/house-bill/5356|title=H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018|last=Stefanik|first=Elise M.|date=2018-05-22|website=www.congress.gov|access-date=2020-03-13|archive-date=2020-03-23|archive-url=https://web.archive.org/web/20200323111045/https://www.congress.gov/bill/115th-congress/house-bill/5356|url-status=live}}
The U.K.'s Vehicle Technology and Aviation Bill, introduced in 2017, imposes liability on the owner of an uninsured automated vehicle when it is driving itself, and makes provisions for cases where the owner has made "unauthorized alterations" to the vehicle or failed to update its software. Further ethical issues arise when, for example, a self-driving car swerves to avoid a pedestrian and causes a fatal accident.{{Cite web |title=The Highway Code - Introduction - Guidance - GOV.UK |url=https://www.gov.uk/guidance/the-highway-code/introduction#self-driving-vehicles |access-date=2022-11-30 |website=www.gov.uk |language=en |archive-date=2022-11-30 |archive-url=https://web.archive.org/web/20221130222304/https://www.gov.uk/guidance/the-highway-code/introduction#self-driving-vehicles |url-status=live }}
In 2021, the European Commission proposed the Artificial Intelligence Act.{{Cite news |date=2021-10-18 |title=Why the world needs a Bill of Rights on AI |work=Financial Times |url=https://www.ft.com/content/17ca620c-4d76-4a2f-829a-27d8552ce719 |access-date=2023-03-19 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925230728/https://www.ft.com/content/17ca620c-4d76-4a2f-829a-27d8552ce719 |url-status=live }}
=== Algorithm certification ===
Algorithm certification is emerging as a method of regulating algorithms. It involves auditing whether the algorithm, throughout its life cycle, 1) conforms to the protocoled requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies the relevant standards, practices, and conventions; and 3) solves the right problem (e.g., correctly models physical laws) and satisfies the intended use and user needs in the operational environment.
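As a rough illustration of item 1), the sketch below shows what an automated certification check for accuracy and consistency might look like; the model under audit, the reference data, and the thresholds are all hypothetical.
<syntaxhighlight lang="python">
# Minimal sketch of an automated audit for requirement 1) above: checking a
# candidate algorithm against protocoled requirements for accuracy and
# consistency. The model under audit, reference data, and thresholds are
# hypothetical.

from typing import Callable, Sequence

def audit_accuracy(predict: Callable[[Sequence[float]], int],
                   inputs: list[Sequence[float]],
                   labels: list[int],
                   min_accuracy: float = 0.95) -> bool:
    """Protocoled requirement: accuracy on the reference dataset meets the threshold."""
    correct = sum(predict(x) == y for x, y in zip(inputs, labels))
    return correct / len(labels) >= min_accuracy

def audit_consistency(predict: Callable[[Sequence[float]], int],
                      inputs: list[Sequence[float]],
                      repeats: int = 3) -> bool:
    """Protocoled requirement: the same input always yields the same decision."""
    return all(len({predict(x) for _ in range(repeats)}) == 1 for x in inputs)

def toy_model(x: Sequence[float]) -> int:
    """Hypothetical stand-in for the algorithm under audit."""
    return int(sum(x) > 1.0)

reference_inputs = [[0.2, 0.3], [0.9, 0.4], [0.1, 0.1], [0.8, 0.8]]
reference_labels = [0, 1, 0, 1]

report = {
    "accuracy": audit_accuracy(toy_model, reference_inputs, reference_labels),
    "consistency": audit_consistency(toy_model, reference_inputs),
}
print("certification checks passed:", report)
</syntaxhighlight>
Items 2) and 3) generally call for documentation and human review of the operational context in addition to automated checks of this kind.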
== Regulation of blockchain algorithms ==
{{See also|Bitcoin#Legal status, tax and regulation|Legality of bitcoin by country or territory|Distributed ledger technology law|Smart contract}}
Blockchain systems provide transparent and immutable records of transactions and thereby conflict with the goal of the European GDPR, which is to give individuals full control of their private data.{{Cite web|url=https://www.siliconrepublic.com/enterprise/blockchain-gdpr-report-bai|title=A recent report issued by the Blockchain Association of Ireland has found there are many more questions than answers when it comes to GDPR|website=siliconrepublic.com|date=23 November 2017|access-date=5 March 2018|archive-url=https://web.archive.org/web/20180305202537/https://www.siliconrepublic.com/enterprise/blockchain-gdpr-report-bai|archive-date=5 March 2018|url-status=live|df=dmy-all}}{{cite web |title=Blockchain and the General Data Protection Regulation - Think Tank |url=https://www.europarl.europa.eu/thinktank/de/document.html?reference=EPRS_STU%282019%29634445 |website=www.europarl.europa.eu |accessdate=28 March 2020 |language=de |archive-date=4 August 2020 |archive-url=https://web.archive.org/web/20200804042536/https://europarl.europa.eu/thinktank/de/document.html?reference=EPRS_STU(2019)634445 |url-status=live }}
With the Decree on Development of Digital Economy, Belarus became the first country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered the author of the smart contract legal concept introduced by the decree.{{cite news |url=https://www.reuters.com/article/us-belarus-cryptocurrency-idUSKBN1EG0XO |title=Belarus adopts crypto-currency law to woo foreign investors |last=Makhovsky |first=Andrei |date=December 22, 2017 |work=Reuters |access-date=April 21, 2020 |archive-date=February 9, 2019 |archive-url=https://web.archive.org/web/20190209124523/https://www.reuters.com/article/us-belarus-cryptocurrency-idUSKBN1EG0XO |url-status=live }}{{cite web |url=https://www2.deloitte.com/content/dam/Deloitte/ru/Documents/tax/lt-in-focus/english/2017/27-12-en.pdf |title=Belarus Enacts Unique Legal Framework for Crypto Economy Stakeholders |date=December 27, 2017 |publisher=Deloitte |access-date=April 21, 2020 |archive-date=May 21, 2020 |archive-url=https://web.archive.org/web/20200521070909/https://www2.deloitte.com/content/dam/Deloitte/ru/Documents/tax/lt-in-focus/english/2017/27-12-en.pdf |url-status=live }}{{cite web |url=https://emerging-europe.com/business/ict-given-huge-boost-in-belarus/ |title=ICT Given Huge Boost in Belarus |last=Patricolo |first=Claudia |date=December 26, 2017 |publisher=Emerging Europe |access-date=April 21, 2020 |archive-date=September 19, 2020 |archive-url=https://web.archive.org/web/20200919152806/https://emerging-europe.com/business/ict-given-huge-boost-in-belarus/ |url-status=live }} There are strong arguments that existing US state laws already provide a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts.{{cite web |last1=Levi |first1=Stuart |last2=Lipton |first2=Alex |last3=Vasile |first3=Christina |title=Blockchain Laws and Regulations {{!}} 13 Legal issues surrounding the use of smart contracts {{!}} GLI |url=https://www.globallegalinsights.com/practice-areas/blockchain-laws-and-regulations/13-legal-issues-surrounding-the-use-of-smart-contracts |website=GLI - Global Legal Insights |accessdate=21 April 2020 |language=en |date=2020 |archive-date=25 September 2020 |archive-url=https://web.archive.org/web/20200925065117/https://www.globallegalinsights.com/practice-areas/blockchain-laws-and-regulations/13-legal-issues-surrounding-the-use-of-smart-contracts |url-status=live }}
== Regulation of robots and autonomous algorithms ==
There have been proposals to regulate robots and autonomous algorithms. These include:
* the South Korean Government's proposal in 2007 of a Robot Ethics Charter;
* a 2011 proposal from the U.K. Engineering and Physical Sciences Research Council of five ethical "principles for designers, builders, and users of robots";
* the Association for Computing Machinery's seven principles for algorithmic transparency and accountability, published in 2017.
== In popular culture ==
In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:
# A robot may not injure a human being or, through inaction, allow a human being to come to harm.
# A robot must obey the orders {{sic|given it|expected=given to it|hide=y}} by human beings except where such orders would conflict with the First Law.
# A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.{{cite book |last1=Asimov |first1=Isaac |title=I, Robot |date=1950 |publisher=Doubleday |location=New York City |isbn=978-0-385-42304-5 |page=40 |edition=The Isaac Asimov Collection |language=en |chapter=Runaround |quote=This is an exact transcription of the laws. They also appear in the front of the book, and in both places there is no "to" in the 2nd law.}}
The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, "thinking machines", a collective term for artificial intelligence, were completely destroyed and banned after a revolt known as the Butlerian Jihad:{{cite book |last=Herbert |first=Frank |title=Dune Messiah |title-link=Dune Messiah |year=1969 }}
JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."{{cite book |last=Herbert |first=Frank |title=Dune |url=https://archive.org/details/dune0000herb |url-access=registration |chapter=Terminology of the Imperium: JIHAD, BUTLERIAN |date=1965|publisher=Philadelphia, Chilton Books }}