Weak artificial intelligence

{{Use dmy dates|date=January 2015}}

{{short description|Form of artificial intelligence}}

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind; when focused on one narrow task, it is also known as narrow AI.{{cite web |last1=Dvorsky |first1=George |title=How Much Longer Before Our First AI Catastrophe? |url=https://gizmodo.com/how-much-longer-before-our-first-ai-catastrophe-464043243 |website=Gizmodo |access-date=November 27, 2021 |date=April 1, 2013}}{{cite web |last1=Muehlhauser |first1=Luke |title=Ben Goertzel on AGI as a Field |url=https://intelligence.org/2013/10/18/ben-goertzel/ |website=Machine Intelligence Research Institute |access-date=November 27, 2021 |date=October 18, 2013}}{{cite web |last1=Chalfen |first1=Mike |title=The Challenges Of Building AI Apps |url=https://techcrunch.com/2015/10/15/machine-learning-its-the-hard-problems-that-are-valuable/ |website=TechCrunch |access-date=November 27, 2021 |date=October 15, 2015}}

Weak AI is contrasted with strong AI, a term that has been interpreted in various ways.

Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category."{{Cite book |last1=Bartneck |first1=Christoph |url=http://link.springer.com/10.1007/978-3-030-51110-4 |title=An Introduction to Ethics in Robotics and AI |last2=Lütge |first2=Christoph |last3=Wagner |first3=Alan |last4=Welsh |first4=Sean |date=2021 |publisher=Springer International Publishing |isbn=978-3-030-51109-8 |series=SpringerBriefs in Ethics |location=Cham |language=en |doi=10.1007/978-3-030-51110-4 |s2cid=224869294}} Artificial general intelligence, by contrast, is not restricted to a single task and can apply intelligence to a wide range of problems.

== Applications and risks ==

Some examples of narrow AI are AlphaGo,{{Cite web |last=Edelman |first=Gary Grossman |date=2020-09-03 |title=We're entering the AI twilight zone between narrow and general AI |url=https://venturebeat.com/ai/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |access-date=2024-03-16 |website=VentureBeat |language=en-US}} self-driving cars, robotic systems used in medicine, and AI-based diagnostic tools. Narrow AI systems can be dangerous if they are unreliable, and their behavior can become inconsistent.{{Cite book |last1=Kuleshov |first1=Andrey |last2=Prokhorov |first2=Sergei |title=2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI) |chapter=Domain Dependence of Definitions Required to Standardize and Compare Performance Characteristics of Weak AI Systems |date=September 2019 |chapter-url=https://ieeexplore.ieee.org/document/9007318 |location=Belgrade, Serbia |publisher=IEEE |pages=62–623 |doi=10.1109/IC-AIAI48757.2019.00020 |isbn=978-1-7281-4326-2|s2cid=211298012 }} Such systems can find it difficult to grasp complex patterns and to reach solutions that work reliably across varied environments, and this "brittleness" can cause them to fail in unpredictable ways.{{Cite web |last=Staff |first=Bulletin |date=2018-04-23 |title=The promise and peril of military applications of artificial intelligence |url=https://thebulletin.org/2018/04/the-promise-and-peril-of-military-applications-of-artificial-intelligence/ |access-date=2024-10-02 |website=Bulletin of the Atomic Scientists |language=en-US}}

Narrow AI failures can sometimes have significant consequences. They could, for example, disrupt the electric grid, damage nuclear power plants, cause global economic problems, or misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses made by a faulty or biased AI can have serious, sometimes deadly, consequences.{{Cite journal |last1=Szocik |first1=Konrad |last2=Jurkowska-Gomułka |first2=Agata |date=2021-12-16 |title=Ethical, Legal and Political Challenges of Artificial Intelligence: Law as a Response to AI-Related Threats and Hopes |url=https://www.tandfonline.com/doi/full/10.1080/02604027.2021.2012876 |journal=World Futures |language=en |pages=1–17 |doi=10.1080/02604027.2021.2012876 |issn=0260-4027 |s2cid=245287612}}

Simple AI programs have already worked their way into society largely unnoticed; examples include autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data sciences.{{Cite journal |last=Earley |first=Seth |date=2017 |title=The Problem With AI |url=https://ieeexplore.ieee.org/document/8012343 |journal=IT Professional |volume=19 |issue=4 |pages=63–67 |doi=10.1109/MITP.2017.3051331 |s2cid=9382416 |issn=1520-9202}} As narrow and relatively general AI slowly begins to help societies, it has also begun to harm them. AI has already unfairly put people in jail, discriminated against women in hiring, taught problematic ideas to millions, and even killed people through autonomous car failures.{{Cite book |last1=Anirudh |first1=Koul |url=https://learning.oreilly.com/library/view/practical-deep-learning/9781492034858/?ar= |title=Practical Deep Learning for Cloud, Mobile, and Edge |last2=Siddha |first2=Ganju |last3=Meher |first3=Kasam |publisher=O'Reilly Media |year=2019 |isbn=9781492034865}} AI can be a powerful tool for improving lives, but it is also a technology with the potential for dangerous misuse.

Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends.{{Cite journal |last1=Kaiser |first1=Carolin |last2=Ahuvia |first2=Aaron |last3=Rauschnabel |first3=Philipp A. |last4=Wimble |first4=Matt |date=2020-09-01 |title=Social media monitoring: What can marketers learn from Facebook brand photos? |url=https://www.sciencedirect.com/science/article/pii/S0148296319305429 |journal=Journal of Business Research |language=en |volume=117 |pages=707–717 |doi=10.1016/j.jbusres.2019.09.017 |issn=0148-2963 |s2cid=203444643}} For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour.{{Cite journal |last=Hyunjin |first=Kang |date=September 2022 |title=AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement |url=https://academic.oup.com/jcmc/article/27/5/zmac014/6670985?login=false |access-date=2022-11-08 |journal=Journal of Computer-Mediated Communication|volume=27 |issue=5 |doi=10.1093/jcmc/zmac014 |hdl=10356/165641 |hdl-access=free }} Other social media AI systems are used to detect bots that may be involved in biased propaganda or other potentially malicious activities.{{Cite journal |last1=Shukla |first1=Rachit |last2=Sinha |first2=Adwitiya |last3=Chaudhary |first3=Ankit |date=28 February 2022 |title=TweezBot: An AI-Driven Online Media Bot Identification Algorithm for Twitter Social Networks |journal=Electronics |language=en |volume=11 |issue=5 |pages=743 |doi=10.3390/electronics11050743 |issn=2079-9292 |doi-access=free}}
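
The basic mechanism behind such recommenders can be illustrated with a minimal sketch: a user's interests are represented as a preference vector that is nudged toward the topic profiles of items the user engages with, and candidate items are then ranked by their similarity to that vector. The item names, topic weights, and update rule below are hypothetical and do not reflect any particular platform's actual algorithm.

<syntaxhighlight lang="python">
# Minimal illustrative sketch of a content-based recommender: score items by
# the similarity between a user's inferred preference vector and each item's
# topic profile, and update the preferences from observed engagement.
import numpy as np

topics = ["sports", "music", "cooking", "news"]

# Hypothetical item catalogue: each item is described by weights over topics.
items = {
    "clip_a": np.array([0.9, 0.1, 0.0, 0.0]),
    "clip_b": np.array([0.0, 0.8, 0.2, 0.0]),
    "clip_c": np.array([0.0, 0.1, 0.9, 0.0]),
}

user_pref = np.zeros(len(topics))  # the user starts with no known interests


def update_preferences(pref, item_vec, engagement, rate=0.5):
    """Move the preference vector toward items the user engaged with."""
    return pref + rate * engagement * item_vec


def rank_items(pref, catalogue):
    """Score every item by dot-product similarity and sort in descending order."""
    scores = {name: float(pref @ vec) for name, vec in catalogue.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Simulated session: the user watches most of clip_b but skips clip_a.
user_pref = update_preferences(user_pref, items["clip_b"], engagement=0.9)
user_pref = update_preferences(user_pref, items["clip_a"], engagement=0.1)

print(rank_items(user_pref, items))  # music-heavy clips now rank first
</syntaxhighlight>

Production recommender systems typically realize the same idea with learned embeddings and far richer engagement signals, but the feedback loop between observed behavior and predicted interest is essentially the one sketched here.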

== Weak AI versus strong AI ==

John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong".{{Cite arXiv |eprint=2103.15294 |class=cs.AI |first=Bin |last=Liu |title="Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us? |date=2021-03-28}}

Scholars such as Antonio Lieto have argued that current research in both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" versus "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling"{{Cite book |last=Lieto |first=Antonio |title=Cognitive Design for Artificial Minds |publisher=Routledge, Taylor & Francis |year=2021 |isbn=9781138207929 |location=London, UK |pages=85}} (as the strong-AI assumption would, by contrast, imply).

== See also ==

* {{anli|Artificial intelligence}}
* {{anli|Artificial general intelligence}}
* {{anli|Deep learning}}
* {{anli|Expert system}}
* {{anli|Hardware for artificial intelligence}}
* {{anli|History of artificial intelligence}}
* {{anli|Machine learning}}
* {{anli|Philosophy of artificial intelligence}}
* {{anli|Synthetic intelligence}}
* {{anli|Virtual assistant}}
* {{anli|Workplace impact of artificial intelligence}}

== References ==