Lethal autonomous weapon
{{Short description|Autonomous military technology system}}
[[File:Sloboda_2019_-_defile_10_-_Land_Rover_Defender_i_robot_Miloš_06.jpg|thumb|A Serbian Land Rover Defender towing a trailer with a "Miloš" tracked combat robot]]
Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. The autonomy of such systems {{as of|2018|lc=y}} was restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
Being autonomous as a weapon
Being "autonomous" has different meanings in different fields of study. In terms of military weapon development, the identification of a weapon as autonomous is not as clear as in other areas.{{cite journal |last=Crootof |first=Rebecca |date=2015 |title=The Killer Robots Are Here: Legal and Policy Implications |url=https://heinonline.org/HOL/Page?collection=journals&handle=hein.journals/cdozo36&id=1943 |journal=Cardozo L. Rev. |volume=36 |pages=1837 |via=heinonline.org}} The specific standard entailed in the concept of being autonomous can vary hugely between different scholars, nations and organizations.
The official United States Department of Defense policy on autonomy in weapon systems defines an autonomous weapon system as "a weapon system that, once activated, can select and engage targets without further intervention by a human operator."{{cite journal |last1=Allen |first1=Gregory |title=DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains Widespread |url=https://www.csis.org/analysis/dod-updating-its-decade-old-autonomous-weapons-policy-confusion-remains-widespread |website=Center for Strategic and International Studies |date=6 June 2022 |access-date=24 July 2022}} Heather Roff, writing in the Case Western Reserve Journal of International Law, describes autonomous weapon systems as "armed weapons systems, capable of learning and adapting their 'functioning in response to changing circumstances in the environment in which [they are] deployed,' as well as capable of making firing decisions on their own."{{Cite web|url=https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1006&context=jil|title=Lethal Autonomous Weapons and Jus Ad Bellum Proportionality|last=Roff|first=Heather|date=2015}}
The British Ministry of Defence defines autonomous weapon systems as "systems that are capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although such human engagement with the system may still be present. While the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be."{{Cite web|url=https://www.gov.uk/government/publications/unmanned-aircraft-systems-jdp-0-302|title=Unmanned aircraft systems (JDP 0-30.2)|website=GOV.UK|language=en|access-date=2018-06-08}}
{{quote box|An artificial agent which, at the very minimum, is able to change its own internal states to achieve a given goal, or set of goals, within its dynamic operating environment and without the direct intervention of another agent and may also be endowed with some abilities for changing its own transition rules without the intervention of another agent, and which is deployed with the purpose of exerting kinetic force against a physical entity (whether an object or a human being) and to this end is able to identify, select or attack the target without the intervention of another agent is an AWS. Once deployed, AWS can be operated with or without some forms of human control (in, on or out the loop). A lethal AWS is a specific subset of an AWS with the goal of exerting kinetic force against human beings.{{cite journal |last1=Taddeo |first1=Mariarosaria |last2=Blanchard |first2=Alexander |title=A Comparative Analysis of the Definitions of Autonomous Weapons Systems |journal=Science and Engineering Ethics |date=2022 |volume=28 |pages=37 |doi=10.1007/s11948-022-00392-3 |doi-access=free|pmc=9399191 }}|align=centre}}
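The definition above separates two capacities: changing internal states in pursuit of a given goal, and changing the agent's own transition rules. The toy Python model below is a minimal sketch of that distinction under invented assumptions; the class, its fields and its update rule are made up for illustration and model no real system.

<syntaxhighlight lang="python">
class ToyAutonomousAgent:
    """Toy illustration of the quoted definition: an agent that changes its
    own internal state toward a given goal without another agent's
    intervention, and that may also rewrite one of its own transition rules.
    Purely illustrative; not a model of any real weapon system."""

    def __init__(self, goal: int):
        self.goal = goal    # the given goal
        self.state = 0      # internal state the agent changes by itself
        self.step = 1       # a transition rule: how far one update moves

    def update(self) -> None:
        # Change the internal state toward the goal (no outside intervention).
        if self.state < self.goal:
            self.state = min(self.goal, self.state + self.step)
        # Change the transition rule itself: larger steps while far away,
        # smaller ones when close. This corresponds to the "changing its own
        # transition rules" clause of the definition.
        self.step = 2 if self.goal - self.state > 10 else 1

agent = ToyAutonomousAgent(goal=25)
while agent.state != agent.goal:
    agent.update()
print(agent.state)  # 25: the goal is reached with no external input
</syntaxhighlight>

In the sketch, the step size plays the role of a transition rule: the agent not only moves toward its goal but also decides, by itself, how its own future updates will behave, which is the stronger form of autonomy the definition singles out.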
Scholars such as Peter Asaro and Mark Gubrud believe that any weapon system capable of releasing lethal force without the operation, decision, or confirmation of a human supervisor can be deemed autonomous.{{Cite journal|last=Asaro|first=Peter|date=2012|title=On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making|journal=Red Cross|volume=687|pages=94|url=https://www.cambridge.org/core/journals/international-review-of-the-red-cross/article/on-banning-autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decisionmaking/992565190BF2912AFC5AC0657AFECF07}}{{Cite news|url=http://gubrud.net/?p=272|title=Autonomy without Mystery: Where do you draw the line?|date=2014-05-09|work=1.0 Human|access-date=2018-06-08|language=en-US|archive-date=2018-11-16|archive-url=https://web.archive.org/web/20181116110640/http://gubrud.net/?p=272|url-status=dead}}
As a result, the composition of a treaty between states requires a commonly accepted definition of what exactly constitutes an autonomous weapon.{{Cite book|url=https://www.taylorfrancis.com/books/9781317109129|title= Killer Robots: Legality and Ethicality of Autonomous Weapons |publisher=Taylor & Francis |year=2016|doi=10.4324/9781315591070|language=en|access-date=2018-06-08|last1=Krishnan|first1=Armin|isbn=9781317109129}}
Automatic defensive systems
The oldest automatically triggered lethal weapons are the land mine, used since at least the 1600s, and the naval mine, used since at least the 1700s. Anti-personnel mines are banned in many countries under the 1997 Ottawa Treaty, which the United States, Russia, and much of Asia and the Middle East have not joined.
Some current examples of LAWs are automated "hardkill" active protection systems, such as the radar-guided close-in weapon systems (CIWS) used to defend ships, which have been in use since the 1970s (e.g., the US Phalanx CIWS). Such systems can autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft and surface vessels according to criteria set by the human operator. Similar systems exist for tanks, such as the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Several types of stationary sentry guns, which can fire at humans and vehicles, are used in South Korea and Israel. Many missile defence systems, such as Iron Dome, also have autonomous targeting capabilities.
The main reason for not having a "human in the loop" in these systems is the need for rapid response. They have generally been used to protect personnel and installations against incoming projectiles.
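The decision logic of such systems can be pictured as a fixed rule set applied to every sensor track. The Python sketch below is purely illustrative: the track fields, threshold values and function names are assumptions invented for the example and are not taken from any actual CIWS.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Track:
    """One radar contact; the fields and units are illustrative assumptions."""
    range_m: float           # distance to the contact in metres
    closing_speed_ms: float  # metres per second; positive if approaching
    altitude_m: float

# Engagement criteria set by the human operator in advance (hypothetical values).
MAX_ENGAGE_RANGE_M = 4000.0
MIN_CLOSING_SPEED_MS = 150.0  # fast enough to resemble an incoming missile
MAX_ALTITUDE_M = 3000.0

def should_engage(track: Track) -> bool:
    """Apply the pre-set criteria to a single track. Once the system is
    activated, this check runs with no further human input; the human
    contribution is front-loaded into the thresholds above."""
    return (
        track.range_m <= MAX_ENGAGE_RANGE_M
        and track.closing_speed_ms >= MIN_CLOSING_SPEED_MS
        and track.altitude_m <= MAX_ALTITUDE_M
    )

# A fast, low, close contact meets all criteria and would be engaged.
print(should_engage(Track(range_m=2500.0, closing_speed_ms=300.0, altitude_m=40.0)))
</syntaxhighlight>

At engagement time the decision is purely mechanical, which is what allows a response faster than human reaction; the operator's judgment enters only through the criteria chosen beforehand.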
Autonomous offensive systems
According to The Economist, as technology advances, future applications of unmanned undersea vehicles might include mine clearance, mine-laying, anti-submarine sensor networking in contested waters, patrolling with active sonar, resupplying manned submarines, and becoming low-cost missile platforms.{{cite news |title=Getting to grips with military robotics |url=https://www.economist.com/news/special-report/21735478-autonomous-robots-and-swarms-will-change-nature-warfare-getting-grips |newspaper=The Economist |date=25 January 2018 |access-date=7 February 2018}} In 2018, the U.S. Nuclear Posture Review alleged that Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" named "Status 6".{{cite news |title=US says Russia 'developing' undersea nuclear-armed torpedo |url=https://www.cnn.com/2018/02/02/politics/pentagon-nuclear-posture-review-russian-drone/index.html |publisher=CNN |date=3 February 2018 |access-date=7 February 2018}}
The Russian Federation has been developing artificially intelligent missiles,{{cite web |title=Russia is building a missile that can makes its own decisions |website=Newsweek |url=http://www.newsweek.com/russia-military-challenge-us-china-missile-own-decisions-639926 |date=20 July 2017}} drones,{{cite news |url=https://www.rbth.com/defence/2017/05/31/russias-digital-weapons-robots-and-artificial-intelligence-prepare-for-wa_773677 |title=Russia's digital doomsday weapons: Robots prepare for war - Russia Beyond |newspaper=Russia Beyond |date=2017-05-31 |last1=Litovkin |first1=Nikolai }} unmanned vehicles, military robots and medic robots.{{cite web | url=https://www.rbth.com/defence/2017/08/09/comrade-in-arms-russia-is-developing-a-freethinking-war-machine_819686 | title='Comrade in Arms': Russia is developing a freethinking war machine| date=2017-08-09}}{{cite web | url=https://www.rbth.com/defence/2017/06/06/rise-of-the-machines-a-look-at-russias-latest-combat-robots_777480 | title=Rise of the Machines: A look at Russia's latest combat robots| date=2017-06-06}}{{cite news | url=https://www.rbth.com/science_and_tech/2016/02/10/is-terminator-back-russians-make-major-advances-in-artificial-intelligence_566553 | title=Is Terminator back? Russians make major advances in artificial intelligence| newspaper=Russia Beyond| date=10 February 2016}}{{cite web| url=https://www.rbth.com/news/2017/05/15/virtual-trainer-for-robots-and-drones-developed-in-russia_763016| title=Virtual trainer for robots and drones developed in Russia| date=15 May 2017| access-date=3 September 2017| archive-date=11 October 2017| archive-url=https://web.archive.org/web/20171011205117/https://www.rbth.com/news/2017/05/15/virtual-trainer-for-robots-and-drones-developed-in-russia_763016| url-status=dead}}
Israeli Minister Ayoob Kara stated in 2017 that Israel is developing military robots, including ones as small as flies.{{cite web |title=Kara: I wasn't revealing state secrets about the robots |url=http://www.jpost.com/Israel-News/Politics-And-Diplomacy/Kara-I-wasnt-revealing-state-secrets-about-the-robots-482616 |work=The Jerusalem Post}}
In October 2018, Zeng Yi, a senior executive at the Chinese defense firm Norinco, gave a speech in which he said that "In future battlegrounds, there will be no people fighting", and that the use of lethal autonomous weapons in warfare is "inevitable".{{cite web |last=Allen |first=Gregory |title=Understanding China's AI Strategy |url=https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy |website=Center for a New American Security |access-date=11 March 2019}} In 2019, US Defense Secretary Mark Esper lashed out at China for selling drones capable of taking life with no human oversight.{{cite news |title=Is China exporting killer robots to Mideast? |url=https://www.asiatimes.com/2019/11/article/is-china-exporting-killer-robots-to-mideast/ |work=Asia Times |date=28 November 2019 |access-date=21 December 2019}}
The British Army deployed new unmanned vehicles and military robots in 2019.{{cite web |url=https://www.armyrecognition.com/march_2019_global_defense_security_army_news_industry/british_army_to_operationally_deploy_new_robots_in_2019.html |title=British Army to operationally deploy new robots in 2019}}
The US Navy is developing "ghost" fleets of unmanned ships.{{Cite web|url=http://navyrecognition.com/index.php/news/defence-news/2019/march/6898-us-navy-plans-to-build-an-unmanned-ghost-fleet.html|title = US Navy plans to build an unmanned Ghost Fleet}}
In 2020 a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021. This may have been the first time an autonomous killer robot armed with lethal weaponry attacked human beings.{{Cite web|last=Hambling|first=David|title=Drones may have attacked humans fully autonomously for the first time|url=https://www.newscientist.com/article/2278852-drones-may-have-attacked-humans-fully-autonomously-for-the-first-time/|access-date=2021-05-30|website=New Scientist|language=en-US}}{{Cite web|date=2021-05-29|title=Killer drone 'hunted down a human target' without being told to|url=https://www.foxnews.com/world/killer-drone-hunted-down-a-human-target-without-being-told-to|access-date=2021-05-30|website=Fox News|language=en-US}}
In May 2021, Israel conducted an AI-guided combat drone swarm attack in Gaza.{{Cite web|last=Hambling|first=David|title=Israel used world's first AI-guided combat drone swarm in Gaza attacks|url=https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/|access-date=2023-01-15|website=New Scientist|language=en-US |date=30 June 2021}}
Since then, there have been numerous reports of swarms and other autonomous weapon systems being used on battlefields around the world.{{cite web|url=https://autonomousweapons.org/|title=SLAUGHTERBOTS ARE HERE.}}
In addition, DARPA is working on making swarms of 250 autonomous lethal drones available to the US military.{{cite web|url=https://thedebrief.org/darpa-dream-of-a-tiny-robot-army-is-close-to-becoming-a-reality/|title=DARPA’s Dream of a Tiny Robot Army Is Close To Becoming a Reality |date=December 1, 2020 |work=The Debrief |first=Tim |last=McMillan}}
Ethical and legal issues
= Degree of human control =
Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report; a schematic sketch of the distinction follows the list.{{cite web|url=https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/|archive-url=https://web.archive.org/web/20170609090053/http://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/|url-status=dead|archive-date=June 9, 2017|title=Pros and Cons of Autonomous Weapons Systems|author1=Amitai Etzioni|author2=Oren Etzioni|work=army.mil|date=June 2017}}
- human-in-the-loop: a human must instigate the action of the weapon (in other words not fully autonomous).
- human-on-the-loop: a human may abort an action.
- human-out-of-the-loop: no human action is involved.
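These three categories can be read as three positions of a human veto in the firing sequence. The short Python sketch below models only that distinction; the enum and function are invented for the example and do not describe any real weapon system.

<syntaxhighlight lang="python">
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must initiate each engagement
    HUMAN_ON_THE_LOOP = auto()      # the system acts, but a human may abort
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human action is involved

def fires(mode: ControlMode, human_authorised: bool, human_aborted: bool) -> bool:
    """Whether the weapon fires, given the control mode and the human inputs."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_authorised   # nothing happens without a human command
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_aborted  # fires unless a human intervenes in time
    return True                   # out of the loop: human inputs are ignored

# On-the-loop control differs from out-of-the-loop only if the abort arrives in time.
print(fires(ControlMode.HUMAN_ON_THE_LOOP, human_authorised=False, human_aborted=True))
</syntaxhighlight>

As the sketch makes explicit, human-on-the-loop control collapses into human-out-of-the-loop whenever an engagement unfolds faster than a human can react, which is one reason the boundaries between the categories are debated.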
= Standard used in US policy =
Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."{{Cite web|url=http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf|archive-url=https://web.archive.org/web/20121201105512/http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf|url-status=dead|archive-date=December 1, 2012|title=Directive 3000.09, Autonomy in weapon systems|date=2012|author=US Department of Defense|page=2}} However, the policy requires that autonomous weapon systems which kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with "appropriate levels" and other standards; it does not hold that such systems cannot meet these standards and are therefore forbidden.{{Cite journal|last=Gubrud|first=Mark|date=April 2015|title=Semi-autonomous and on their own: Killer robots in Plato's Cave|url=https://thebulletin.org/2015/04/semi-autonomous-and-on-their-own-killer-robots-in-platos-cave/|url-status=live|journal=Bulletin of the Atomic Scientists|access-date=2017-10-30|archive-date=2017-05-09|archive-url=https://web.archive.org/web/20170509042433/http://thebulletin.org/semi-autonomous-and-their-own-killer-robots-plato%E2%80%99s-cave8199}} "Semi-autonomous" hunter-killers that autonomously identify and attack targets do not even require certification. Deputy Defense Secretary Robert O. Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this since "authoritarian regimes" may do so.{{cite news|url=https://www.bostonglobe.com/news/nation/2016/03/30/the-killer-robot-threat-pentagon-examining-how-enemies-could-empower-machines/sFri6ZDifwIcQR2UgyXlQI/story.html|title=Pentagon examining the 'killer robot' threat|date=30 March 2016|newspaper=Boston Globe |first=Dan |last=Lamothe}} In October 2016, President Barack Obama stated that early in his career he was wary of a future in which a US president making use of drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate".{{cite web|url=https://nymag.com/daily/intelligencer/2016/10/barack-obama-on-5-days-that-shaped-his-presidency.html|title=Barack Obama on 5 Days That Shaped His Presidency|publisher=Daily Intelligencer|access-date=3 January 2017|date=2016-10-03 |first=Jonathan |last=Chait}}{{cite web|url=https://theintercept.com/2016/10/03/obama-worries-future-presidents-will-wage-perpetual-covert-drone-war/|title=Obama Worries Future Presidents Will Wage Perpetual, Covert Drone War|last1=Devereaux|first1=Ryan|last2=Emmons|first2=Alex|publisher=The Intercept|access-date=3 January 2017|date=2016-10-03}} In the US, security-related AI has been examined by the National Security Commission on Artificial Intelligence, established in 2018.{{Cite web|url=https://www.congress.gov/bill/115th-congress/house-bill/5356|title=H.R.5356 - 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018|last=Stefanik|first=Elise M.|date=2018-05-22|website=www.congress.gov|access-date=2020-03-13}}{{Cite journal|last=Baum|first=Seth|date=2018-09-30|title=Countering Superintelligence Misinformation|journal=Information|volume=9|issue=10|pages=244|doi=10.3390/info9100244|issn=2078-2489|doi-access=free}} On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the Department of Defense, intended to ensure that a human operator would always be able to look into the "black box" and understand the kill-chain process. A major concern is how the report's recommendations will be implemented.{{Cite book|last=United States. Defense Innovation Board.|title=AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense|oclc=1126650738}}
= Possible violations of ethics and international acts =
Stuart Russell, professor of computer science at the University of California, Berkeley, has argued that LAWs are unethical and inhumane, and that a central problem with such systems is the difficulty of distinguishing between combatants and non-combatants.{{Cite journal|last=Russell|first=Stuart|date=27 May 2015|title=Take a stand on AI weapons|url=http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611|journal=International Weekly Journal of Science|volume=521}}
Some economists{{Cite journal|last1=Coyne|first1=Christopher|last2=Alshamy|first2=Yahya A.|date=2021-04-03|title=Perverse Consequences of Lethal Autonomous Weapons Systems|url=https://doi.org/10.1080/10402659.2021.1998747|journal=Peace Review|volume=33|issue=2|pages=190–198|doi=10.1080/10402659.2021.1998747|s2cid=233764057 |issn=1040-2659}} and legal scholars are concerned that LAWs would violate International Humanitarian Law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that damage to civilians be proportional to the military aim.{{Cite journal|last=Sharkey|first=Noel E.|date=June 2012|title=The evitability of autonomous robot warfare*|url=https://www.cambridge.org/core/journals/international-review-of-the-red-cross/article/evitability-of-autonomous-robot-warfare/35D0C3294D834F23BF1C0B33FC51A166|journal=International Review of the Red Cross|language=en|volume=94|issue=886|pages=787–799|doi=10.1017/S1816383112000732|s2cid=145682587 |issn=1816-3831}} This concern is often invoked as a reason to ban "killer robots" altogether, but it is doubtful that it can serve as an argument against LAWs that do not violate International Humanitarian Law.{{Cite book|url=http://philpapers.org/rec/MLLAKR|title=Autonomous killer robots are probably good news|pages=67–81|last=Müller|first=Vincent C.|date=2016|publisher=Ashgate}}{{Cite journal|last1=Umbrello|first1=Steven|last2=Torres|first2=Phil|last3=De Bellis|first3=Angelo F.|date=2020-03-01|title=The future of war: could lethal autonomous weapons make conflict more ethical?|url=https://doi.org/10.1007/s00146-019-00879-x|journal=AI & Society |language=en|volume=35|issue=1|pages=273–282|doi=10.1007/s00146-019-00879-x|hdl=2318/1699364|s2cid=59606353|issn=1435-5655|hdl-access=free}}{{Cite journal|last1=Umbrello|first1=Steven|last2=Wood|first2=Nathan Gabriel|date=2021-04-20|title=Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status|journal=Information|volume=12|issue=5|pages=216|doi=10.3390/info12050216|doi-access=free|hdl=1854/LU-8709449|hdl-access=free}}
A 2021 report by the US Congressional Research Service states that "there are no domestic or international legal prohibitions on the development or use of LAWs," although it acknowledges ongoing talks at the UN Convention on Certain Conventional Weapons (CCW).{{cite report |author=Kelley M. Sayler |date=June 8, 2021 |title=Defense Primer: Emerging Technologies |url=https://fas.org/sgp/crs/natsec/IF11105.pdf |publisher=Congressional Research Service |access-date=July 22, 2021}}
LAWs are said by some to blur the boundaries of who is responsible for a particular killing.{{Cite web|url=https://works.bepress.com/nym/9/download/|title=Doctor of Philosophy Thesis in Military Informatics (OpenPhD #openphd ) : Lethal Autonomy of Weapons is Designed and/or Recessive|last=Nyagudi|first=Nyagudi Musandu|date=2016-12-09|access-date=2017-01-06|archive-date=2017-01-07|archive-url=https://web.archive.org/web/20170107004021/https://works.bepress.com/nym/9/download/|url-status=dead}} Philosopher Robert Sparrow argues that autonomous weapons are causally but not morally responsible, similar to child soldiers. In each case, he argues there is a risk of atrocities occurring without an appropriate subject to hold responsible, which violates jus in bello.{{Cite journal|last=SPARROW|first=ROBERT|date=2007|title=Killer Robots|url=https://www.jstor.org/stable/24355087|journal=Journal of Applied Philosophy|volume=24|issue=1|pages=62–77|doi=10.1111/j.1468-5930.2007.00346.x|jstor=24355087|s2cid=239364893 |issn=0264-3758}} Thomas Simpson and Vincent Müller argue that LAWs may make it easier to record who gave which command.{{Cite journal|url=http://philpapers.org/rec/SIMJWA|title=Just war and robots' killings|journal=Philosophical Quarterly|volume=66|issue=263|pages=302–22|date=2016|last=Simpson|first=Thomas W|author2=Müller, Vincent C.|doi=10.1093/pq/pqv075|doi-access=free}} Potential IHL violations by LAWs are – by definition – only applicable in conflict settings that involve the need to distinguish between combatants and civilians. As such, any conflict scenario devoid of civilians' presence – e.g. in space or the deep sea – would not run into the obstacles posed by IHL.Boulanin et al., "Limits on Autonomy in Weapon Systems", SIPRI & ICRC (2020): 37.
= Campaigns to ban LAWs =
[[File:Rally against SFPD killer robots 20221205-181712605.jpg|thumb|Demonstrators in San Francisco protesting against a vote to authorize police use of deadly force robots]]
The possibility of LAWs has generated significant debate, especially about the risk of "killer robots" roaming the earth in the near or distant future. The group Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an artificial intelligence arms race and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others.{{cite web|title = Musk, Hawking Warn of Artificial Intelligence Weapons|url = https://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|website = WSJ Blogs - Digits|date = 2015-07-27 |first=Cat |last=Zakrzewski |access-date = 2015-07-28}}{{cite news|title=Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons|url=https://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons|work=The Guardian|access-date=28 July 2015|date=27 July 2015|first=Samuel|last=Gibbs}}
According to PAX For Peace (one of the founding organisations of the Campaign to Stop Killer Robots), fully automated weapons (FAWs) will lower the threshold of going to war as soldiers are removed from the battlefield and the public is distanced from experiencing war, giving politicians and other decision-makers more space in deciding when and how to go to war.{{cite web|title=Deadly Decisions - 8 objections to killer robots|url=https://www.paxvoorvrede.nl/media/files/deadlydecisionsweb.pdf|access-date=2 December 2016|page=10}} They warn that once deployed, FAWs will make democratic control of war more difficult; Daniel Suarez, IT specialist and author of the novel Kill Decision, has warned of the same, arguing that FAWs might recentralize power into very few hands by requiring very few people to go to war.
The website Ban Lethal Autonomous Weapons protests the development of LAWs by presenting the undesirable ramifications of continued research into applying artificial intelligence to weapon targeting, and regularly posts updates on ethical and legal issues, international meetings, and research articles concerning LAWs.{{Cite news|url=https://autonomousweapons.org/|title=Front page|date=2017-11-10|work=Ban Lethal Autonomous Weapons|access-date=2018-06-09|language=en-US}}
The Holy See has called for the international community to ban the use of LAWs on several occasions. In November 2018, Archbishop Ivan Jurkovic, the permanent observer of the Holy See to the United Nations, stated that “In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow’s warfare.” The Church worries that these weapons systems have the capability to irreversibly alter the nature of warfare, create detachment from human agency and call into question the humanity of societies.{{Cite news|url=https://www.catholicnewsagency.com/news/40009/holy-see-renews-appeal-to-ban-killer-robots|title=Holy See renews appeal to ban killer robots|work=Catholic News Agency |date=November 28, 2018 |access-date=30 November 2018}}
{{As of|2019|March|29}}, the majority of governments represented at a UN meeting to discuss the matter favoured a ban on LAWs.{{cite web |last=Gayle |first=Damien |title=UK, US and Russia among those opposing killer robot ban |url=http://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai |website=The Guardian |date=29 March 2019 |access-date=30 March 2019}} A minority of governments, including those of Australia, Israel, Russia, the UK, and the US, opposed a ban. The United States has stated that autonomous weapons have helped prevent the killing of civilians.{{cite news |title=Should 'killer robots' be banned? |url=https://www.dw.com/en/should-killer-robots-be-banned/a-45237864 |first=Nina |last=Werkhäuser |work=Deutsche Welle (DW) |date=27 August 2018 |access-date=31 December 2021}}
In December 2022, a vote of the San Francisco Board of Supervisors to authorize San Francisco Police Department use of LAWs drew national attention and protests.{{cite news |last1=Silva |first1=Daniella |title=San Francisco vote to allow police use of deadly robots spurs concern and outrage |url=https://www.nbcnews.com/news/us-news/san-francisco-vote-allow-police-use-deadly-robots-spurs-concern-outrag-rcna59841 |access-date=5 December 2022 |work=NBC News |date=2 December 2022}}{{cite news |last1=Holand |first1=Lena |title=Activists push back against SFPD's deadly force robots amid legality issues |url=https://abc7news.com/sf-police-robots-sfpd-killer-robot-deadly-force-vote-on/12529149/ |access-date=5 December 2022 |work=KGO-TV |date=5 December 2022}} The Board reversed this vote in a subsequent meeting.{{cite news |last1=Morris |first1=J.D. |title=S.F. halts 'killer robots' police policy after huge backlash — for now |url=https://www.sfchronicle.com/bayarea/article/S-F-halts-killer-robots-police-policy-17636020.php |access-date=6 December 2022 |work=San Francisco Chronicle |date=6 December 2022}}
= No ban, but regulation =
A third approach focuses on regulating the use of autonomous weapon systems in lieu of a ban.{{cite journal| last=Bento | first=Lucas | title = No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law|url = https://dash.harvard.edu/handle/1/33813394|website = Harvard Scholarship Depository|date = 2017|access-date = 2019-09-14}} Military AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal ('Track II') diplomacy by communities of experts, together with a legal and political verification process.{{cite journal|last=Geist|first=Edward Moore|date=2016-08-15|title=It's already too late to stop the AI arms race—We must manage it instead|journal=Bulletin of the Atomic Scientists|volume=72|issue=5|pages=318–321|doi=10.1080/00963402.2016.1216672|bibcode=2016BuAtS..72e.318G|s2cid=151967826|issn=0096-3402}}{{cite journal|last=Maas|first=Matthijs M.|date=2019-02-06|title=How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons |journal=Contemporary Security Policy |volume=40 |issue=3 |pages=285–311 |doi=10.1080/13523260.2019.1576464 |s2cid=159310223|issn=1352-3260}}{{cite journal|last=Ekelhof|first=Merel|date=2019|title=Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation|journal=Global Policy |volume=10|issue=3|pages=343–348|doi=10.1111/1758-5899.12665|issn=1758-5899|doi-access=free}}{{cite journal|last=Umbrello|first=Steven|date=2021-04-05|title=Coupling levels of abstraction in understanding meaningful human control of autonomous weapons: a two-tiered approach|journal=Ethics and Information Technology|volume=23|issue=3|pages=455–464|language=en|doi=10.1007/s10676-021-09588-w|issn=1572-8439|doi-access=free|hdl=2318/1784315|hdl-access=free}} In 2021, the United States Department of Defense requested a dialogue with the Chinese People's Liberation Army on AI-enabled autonomous weapons but was refused.{{Cite journal |title=One Key Challenge for Diplomacy on AI: China's Military Does Not Want to Talk |url=https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk |website=Center for Strategic and International Studies (CSIS) |date=May 20, 2022 |first=Gregory C. |last=Allen |language=en |access-date=2022-05-20}}
In February 2023, a summit of 60 countries was held in The Hague on the responsible use of AI in the military.{{Cite web |title=Nations agree to curb enthusiasm for military AI before it destroys the world |url=https://www.theregister.com/2023/02/17/military_ai_summit/ |access-date=2023-02-17 |work=The Register |first=Brandon |last=Vigliarolo |date=17 February 2023 |language=en}}
On 22 December 2023, a United Nations General Assembly resolution was adopted to support international discussion regarding concerns about LAWs. The vote was 152 in favor, four against, and 11 abstentions.{{cite news |last1=Pandey |first1=Shashank |title=HRW calls for international treaty to ban ‘killer robots’ |url=https://www.jurist.org/news/2024/01/hrw-calls-for-international-treaty-to-ban-killer-robots/ |access-date=8 January 2024 |work=Jurist |date=4 January 2024}}
References
{{Reflist}}
Further reading
- {{Cite news |last1=Toussaint-Strauss |first1=Josh |last2=Assaf |first2=Ali |last3=Pierce |first3=Joseph |last4=Baxter |first4=Ryan |date=2023-02-24 |title=How killer robots are changing modern warfare – video |language=en-GB |work=The Guardian |url=https://www.theguardian.com/technology/video/2023/feb/24/how-killer-robots-are-changing-modern-warfare-video |access-date=2023-02-27 |issn=0261-3077}}
- Heyns, Christof (2013), [https://idsn.org/wp-content/uploads/2015/02/SR_executions1.pdf ‘Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions’], UN General Assembly, Human Rights Council, 23 (3), A/HRC/23/47.
- Krishnan, Armin (2009), [https://books.google.com/books?id=UNkFDAAAQBAJ Killer robots: Legality and ethicality of autonomous weapons] (Aldershot: Ashgate)
- Müller, Vincent C. (2016), [http://philpapers.org/rec/MLLAKR ‘Autonomous killer robots are probably good news’], in Ezio Di Nucci and Filippo Santoni de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons, 67-81 (London: Ashgate).
- {{cite book |last1=Saxon |first1=Dan |title=Fighting Machines: Autonomous Weapons and Human Dignity |date=2022 |publisher=University of Pennsylvania Press |isbn=978-0-8122-9818-5 |language=en}}
- Sharkey, Noel E (2012), ‘[https://search.informit.org/doi/abs/10.3316/ielapa.034548084727579 Automating Warfare: lessons learned from the drones]’, Journal of Law, Information & Science, 21 (2).
- Simpson, Thomas W and Müller, Vincent C. (2016), [http://philpapers.org/rec/SIMJWA 'Just war and robots’ killings'], The Philosophical Quarterly 66 (263), 302–22.
- Singer, Peter (2009), Wired for war: The robotics revolution and conflict in the 21st Century (New York: Penguin)
- US Department of Defense (2012), ‘[https://www.hsdl.org/?abstract&did=726163#:~:text=Department%20of%20Defense%20Directive%203000.09,%5Bopen%20pdf%20%2D%20191%20KB%20%5D&text=Establishes%20guidelines%20designed%20to%20minimize,could%20lead%20to%20unintended%20engagements.%22 Directive 3000.09, Autonomy in weapon systems]’.
- US Department of Defense (2013), ‘[https://web.archive.org/web/20140102102556/http://www.defense.gov/pubs/DOD-USRM-2013.pdf Unmanned Systems Integrated Road Map FY2013-2038]’.
- The Ethics of Autonomous Weapons Systems (2014) [https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/schedule-required-readings.php Seminar at UPenn] {{Webarchive|url=https://web.archive.org/web/20201025031502/https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/schedule-required-readings.php |date=2020-10-25 }}