Artificial intelligence in Wikimedia projects
{{Short description|none}}
Artificial intelligence is used in Wikipedia and other Wikimedia projects to develop those projects.{{cite web |last1=Marr |first1=Bernard |title=The Amazing Ways How Wikipedia Uses Artificial Intelligence |url=https://www.forbes.com/sites/bernardmarr/2018/08/17/the-amazing-ways-how-wikipedia-uses-artificial-intelligence/#7cbdda802b9d |website=Forbes |language=en |date=17 August 2018}}{{cite news |last=Gertner |first=Jon |title=Wikipedia's Moment of Truth - Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process? + comment |url=https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html |date=18 July 2023 |work=The New York Times |url-status=bot: unknown |archiveurl=https://web.archive.org/web/20230718233916/https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html#permid=126389255 |archivedate=18 July 2023 |accessdate=19 July 2023 }} Human and bot interaction in Wikimedia projects is routine and iterative.{{cite arXiv |last1=Piscopo |first1=Alessandro |title=Wikidata: A New Paradigm of Human-Bot Collaboration? |date=1 October 2018 |eprint=1810.00931|class=cs.HC }}
==Using artificial intelligence for Wikimedia projects==
Various projects seek to improve Wikipedia and Wikimedia projects by using artificial intelligence tools.
===ORES===
The Objective Revision Evaluation Service (ORES) project is an artificial intelligence service for grading the quality of Wikipedia edits.{{cite web |last1=Simonite |first1=Tom |title=Software That Can Spot Rookie Mistakes Could Make Wikipedia More Welcoming |url=https://www.technologyreview.com/s/544036/artificial-intelligence-aims-to-make-wikipedia-friendlier-and-better/ |website=MIT Technology Review |language=en |date=1 December 2015}}{{Cite magazine |last1=Metz |first1=Cade |title=Wikipedia Deploys AI to Expand Its Ranks of Human Editors |url=https://www.wired.com/2015/12/wikipedia-is-using-ai-to-expand-the-ranks-of-human-editors/ |magazine=Wired |date=1 December 2015|archive-url=https://web.archive.org/web/20240402000516/https://www.wired.com/2015/12/wikipedia-is-using-ai-to-expand-the-ranks-of-human-editors/|archive-date=2 Apr 2024}} The Wikimedia Foundation presented the ORES project in November 2015.{{cite web |last1=Halfaker |first1=Aaron |last2=Taraborelli |first2=Dario |title=Artificial intelligence service "ORES" gives Wikipedians X-ray specs to see through bad edits |url=https://wikimediafoundation.org/2015/11/30/artificial-intelligence-x-ray-specs/ |website=Wikimedia Foundation |date=30 November 2015}}
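ORES exposed its quality scores through a public web API. As a rough illustration only, a tool might build a scoring request and read the model's output as sketched below; the endpoint shape follows the documented v3 API, but the exact host, parameters, and response format are assumptions to check against current Wikimedia documentation (ORES has since been deprecated in favor of newer model-hosting infrastructure), and the revision ID and response values are invented placeholders.

```python
from urllib.parse import urlencode

# Base of the (now-deprecated) ORES v3 scoring API -- an assumption to verify.
ORES_BASE = "https://ores.wikimedia.org/v3/scores"

def ores_score_url(wiki, revids, models):
    """Build a URL asking ORES to score the given revisions with the given models."""
    query = urlencode({
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in revids),
    })
    return f"{ORES_BASE}/{wiki}?{query}"

def damaging_probability(response, wiki, revid):
    """Extract the 'damaging' probability for one revision from a v3-style response."""
    score = response[wiki]["scores"][str(revid)]["damaging"]["score"]
    return score["probability"]["true"]

# Canned response in the v3 shape, so the parsing can be shown without a network call.
sample_response = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.97, "true": 0.03},
                    }
                }
            }
        }
    }
}

print(ores_score_url("enwiki", [123456], ["damaging", "goodfaith"]))
print(damaging_probability(sample_response, "enwiki", 123456))  # 0.03
```

A patrolling tool would fetch the real URL over HTTP and flag edits whose "damaging" probability exceeds some threshold for human review.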
===Wiki bots===
{{Excerpt|Vandalism on Wikipedia|ClueBot NG}}
===Detox===
Detox was a project by Google, in collaboration with the Wikimedia Foundation, to research methods of addressing unkind comments posted by users in Wikimedia community discussions.{{cite web |title=Research:Detox |url=https://meta.wikimedia.org/wiki/Research:Detox |website=Meta-Wiki |language=en}} As part of the Detox project, the Wikimedia Foundation and Jigsaw collaborated to use artificial intelligence for basic research and to develop technical solutions{{examples needed|date=April 2023}} to address the problem. In October 2016 those organizations published "Ex Machina: Personal Attacks Seen at Scale" describing their findings.{{Cite book |pages=1391–1399 |doi=10.1145/3038912.3052591 |arxiv=1610.08914|year=2017 |last1=Wulczyn |first1=Ellery |last2=Thain |first2=Nithum |last3=Dixon |first3=Lucas |title=Proceedings of the 26th International Conference on World Wide Web |chapter=Ex Machina: Personal Attacks Seen at Scale |isbn=9781450349130 |s2cid=6060248 }}{{cite web |author1=Jigsaw |title=Algorithms And Insults: Scaling Up Our Understanding Of Harassment On Wikipedia |url=https://medium.com/jigsaw/algorithms-and-insults-scaling-up-our-understanding-of-harassment-on-wikipedia-6cc417b9f7ff |website=Medium |date=7 February 2017}} Various popular media outlets reported on the publication of this paper and described the social context of the research.{{cite news |last1=Wakabayashi |first1=Daisuke |title=Google Cousin Develops Technology to Flag Toxic Online Comments |url=https://www.nytimes.com/2017/02/23/technology/google-jigsaw-monitor-toxic-online-comments.html |work=The New York Times |language=en |date=23 February 2017}}{{cite web |last1=Smellie |first1=Sarah |title=Inside Wikipedia's Attempt to Use Artificial Intelligence to Combat Harassment |url=https://motherboard.vice.com/en_us/article/aeyvxz/wikipedia-jigsaw-google-artificial-intelligence |website=Motherboard |publisher=Vice Media |language=en-us |date=17 February 2017}}{{cite web 
|last1=Gershgorn |first1=Dave |title=Alphabet's hate-fighting AI doesn't understand hate yet |url=https://qz.com/918640/alphabets-hate-fighting-ai-doesnt-understand-hate-yet/ |website=Quartz |date=27 February 2017}}
===Bias reduction===
In August 2018, a company called Primer reported attempting to use artificial intelligence to create Wikipedia articles about women as a way to address gender bias on Wikipedia.{{Cite magazine |last1=Simonite |first1=Tom |title=Using Artificial Intelligence to Fix Wikipedia's Gender Problem |url=https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/ |magazine=Wired |date=3 August 2018}}{{cite web |last1=Verger |first1=Rob |title=Artificial intelligence can now help write Wikipedia pages for overlooked scientists |url=https://www.popsci.com/artificial-intelligence-scientists-wikipedia |website=Popular Science |language=en |date=7 August 2018}}
[[File:DeepL machine translation of English Wikipedia example.png|thumb|Machine translation is used by contributors.{{cite journal |last1=Costa-jussà |first1=Marta R. |last2=Cross |first2=James |last3=Çelebi |first3=Onur |last4=Elbayad |first4=Maha |last5=Heafield |first5=Kenneth |last6=Heffernan |first6=Kevin |last7=Kalbassi |first7=Elahe |last8=Lam |first8=Janice |last9=Licht |first9=Daniel |last10=Maillard |first10=Jean |last11=Sun |first11=Anna |last12=Wang |first12=Skyler |last13=Wenzek |first13=Guillaume |last14=Youngblood |first14=Al |last15=Akula |first15=Bapi |last16=Barrault |first16=Loic |last17=Gonzalez |first17=Gabriel Mejia |last18=Hansanti |first18=Prangthip |last19=Hoffman |first19=John |last20=Jarrett |first20=Semarley |last21=Sadagopan |first21=Kaushik Ram |last22=Rowe |first22=Dirk |last23=Spruit |first23=Shannon |last24=Tran |first24=Chau |last25=Andrews |first25=Pierre |last26=Ayan |first26=Necip Fazil |last27=Bhosale |first27=Shruti |last28=Edunov |first28=Sergey |last29=Fan |first29=Angela |last30=Gao |first30=Cynthia |last31=Goswami |first31=Vedanuj |last32=Guzmán |first32=Francisco |last33=Koehn |first33=Philipp |last34=Mourachko |first34=Alexandre |last35=Ropers |first35=Christophe |last36=Saleem |first36=Safiyyah |last37=Schwenk |first37=Holger |last38=Wang |first38=Jeff |title=Scaling neural machine translation to 200 languages |journal=Nature |date=June 2024 |volume=630 |issue=8018 |pages=841–846 |doi=10.1038/s41586-024-07335-x |language=en |issn=1476-4687|pmc=11208141 |bibcode=2024Natur.630..841N }}{{cite arXiv |title=Considerations for Multilingual Wikipedia Research |eprint=2204.02483 |last1=Johnson |first1=Isaac |last2=Lescak |first2=Emily |date=2022 |class=cs.CY }}{{cite book |last1=Mamadouh |first1=Virginie |title=Handbook of the Changing World Language Map |date=2020 |publisher=Springer International Publishing |isbn=978-3-030-02438-3 |pages=3773–3799 |chapter-url=https://link.springer.com/referenceworkentry/10.1007/978-3-030-02438-3_200 |language=en 
|chapter=Wikipedia: Mirror, Microcosm, and Motor of Global Linguistic Diversity|doi=10.1007/978-3-030-02438-3_200 |quote=Some versions have expanded dramatically using machine translation through the work of bots or web robots generating articles by translating them automatically from the other Wikipedias, often the English Wikipedia. […] In any event, the English Wikipedia is different from the others because it clearly serves a global audience, while other versions serve more localized audience, even if the Portuguese, Spanish, and French Wikipedias also serves a public spread across different continents}} More than 40% of Wikipedia's active editors
are in English Wikipedia.{{cite arXiv |title=InfoSync: Information Synchronization across Multilingual Semi-structured Tables |eprint=2307.03313 |last1=Khincha |first1=Siddharth |last2=Jain |first2=Chelsi |last3=Gupta |first3=Vivek |last4=Kataria |first4=Tushar |last5=Zhang |first5=Shuo |date=2023 |class=cs.CL }}]]
===Generative models===
====Text====
In 2022, the public release of ChatGPT inspired more experimentation with AI for writing Wikipedia articles and sparked debate about whether, and to what extent, large language models are suitable for this purpose, given their tendency to generate plausible-sounding misinformation, including fake references; to produce prose that is not encyclopedic in tone; and to reproduce biases.{{Cite web |last=Harrison |first=Stephen |date=2023-01-12 |title=Should ChatGPT Be Used to Write Wikipedia Articles? |url=https://slate.com/technology/2023/01/chatgpt-wikipedia-articles.html |access-date=2023-01-13 |website=Slate Magazine |language=en}} {{As of|2023|05}}, a draft Wikipedia policy on ChatGPT and similar large language models (LLMs) recommended that users who are unfamiliar with LLMs should avoid using them due to the aforementioned risks, as well as the potential for libel or copyright infringement.{{cite news |last1=Woodcock |first1=Claire |title=AI Is Tearing Wikipedia Apart |url=https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart |work=Vice |date=2 May 2023 |language=en}}
====Other media====
WikiProject AI Cleanup is a WikiProject dedicated to finding and removing AI-generated text and images.{{Cite news |last=Maiberg |first=Emanuel |date=October 9, 2024 |title=The Editors Protecting Wikipedia from AI Hoaxes |url=https://www.404media.co/the-editors-protecting-wikipedia-from-ai-hoaxes/ |access-date=October 9, 2024 |work=404 Media}}
==Using Wikimedia projects for artificial intelligence==
Content in Wikimedia projects is useful as a dataset for advancing artificial intelligence research and applications. For instance, the development of Google's Perspective API, which identifies toxic comments in online forums, used a dataset of hundreds of thousands of Wikipedia talk page comments with human-labelled toxicity levels.{{Cite news|url=https://www.engadget.com/2017/09/01/google-perspective-comment-ranking-system/|title=Google's comment-ranking system will be a hit with the alt-right|work=Engadget|date=2017-09-01}} Subsets of the Wikipedia corpus are considered the largest well-curated data sets available for AI training.
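The Perspective models themselves are neural networks trained at scale. Purely to illustrate how a corpus of human-labelled comments supports supervised learning of this kind, the toy sketch below fits a multinomial Naive Bayes classifier; the "comments" in it are invented placeholders, not entries from the actual Wikipedia talk page dataset.

```python
import math
from collections import Counter

# Tiny labelled corpus in the spirit of the talk-page toxicity data:
# label 0 = civil, label 1 = toxic. All texts are invented placeholders.
train = [
    ("thanks for fixing the citation", 0),
    ("good catch I agree with the revert", 0),
    ("please discuss changes on the talk page", 0),
    ("you are an idiot and your edit is garbage", 1),
    ("shut up nobody wants your stupid opinion", 1),
    ("what a worthless idiot edit", 1),
]

def fit(examples):
    """Count word frequencies per class for a multinomial Naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def toxic_log_odds(text, counts, labels):
    """log P(toxic|text) - log P(civil|text), with add-one smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    score = math.log(labels[1] / labels[0])  # class prior
    for word in text.split():
        p_tox = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p_civ = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        score += math.log(p_tox / p_civ)
    return score

counts, labels = fit(train)
print(toxic_log_odds("you idiot", counts, labels) > 0)            # True
print(toxic_log_odds("thanks for the fix", counts, labels) > 0)   # False
```

A positive log-odds score means the model considers the comment more likely toxic than civil; production systems replace the bag-of-words model with neural networks but keep the same train-on-labelled-comments structure.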
A 2012 paper reported that more than 1,000 academic articles, including those using artificial intelligence, examine Wikipedia, reuse information from Wikipedia, use technical extensions linked to Wikipedia, or research communication about Wikipedia.{{cite journal |last1=Nielsen |first1=Finn Årup |title=Wikipedia Research and Tools: Review and Comments |journal=SSRN Working Paper Series |date=2012 |doi=10.2139/ssrn.2129874 |language=en |issn=1556-5068}} A 2017 paper described Wikipedia as the mother lode for human-generated text available for machine learning.{{cite journal |last1=Mehdi |first1=Mohamad |last2=Okoli |first2=Chitu |last3=Mesgari |first3=Mostafa |last4=Nielsen |first4=Finn Årup |last5=Lanamäki |first5=Arto |title=Excavating the mother lode of human-generated text: A systematic review of research that uses the wikipedia corpus |journal=Information Processing & Management |volume=53 |issue=2 |pages=505–529 |doi=10.1016/j.ipm.2016.07.003 |date=March 2017|s2cid=217265814 |url=http://urn.fi/urn:nbn:fi-fe202003057304 }}
A 2016 research project called "One Hundred Year Study on Artificial Intelligence" named Wikipedia as a key early project for understanding the interplay between artificial intelligence applications and human engagement.{{cite web |title=AI Research Trends - One Hundred Year Study on Artificial Intelligence (AI100) |url=https://ai100.stanford.edu/2016-report/section-i-what-artificial-intelligence/ai-research-trends |website=ai100.stanford.edu |language=en}}
There is concern about the lack of attribution to Wikipedia articles in large language models like ChatGPT.{{cite news |title=Wikipedia's Moment of Truth |url=https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html |access-date=29 November 2024 |work=New York Times}}{{cite news |title=Wikipedia Built the Internet’s Brain. Now Its Leaders Want Credit. |url=https://observer.com/2025/03/wikimedia-foundation-execs-speak-on-ai-scraping-attribution-and-wikipedias-future/ |access-date=2 April 2025 |work=Observer |date=28 March 2025|quote=Attributions, however, remain a sticking point. Citations not only give credit but also help Wikipedia attract new editors and donors. “If our content is getting sucked into an LLM without attribution or links, that’s a real problem for us in the short term.”}} While Wikipedia's licensing lets anyone use its texts, including in modified form, it requires that credit be given; AI models that use its content in their answers without clarifying the source may therefore violate its terms of use.
==See also==
{{Commons category|Wikimedia projects and AI}}
{{clear}}
==References==
{{reflist}}
==External links==
{{Wikimedia Foundation|state=collapsed}}