Artificial intelligence rhetoric
{{short description|Persuasive text and speech created by artificial intelligence}}
{{Use dmy dates|date=November 2024}}
[[File:UK national football team considering compete in UEFA Euro and FIFA World Cup – ChatGPT.jpg|thumb|A ChatGPT response presenting arguments for and against a potential rule change to a soccer tournament]]
'''Artificial intelligence rhetoric''' (or '''AI rhetoric''') is a term primarily applied to persuasive text and speech generated by chatbots using generative artificial intelligence, although the term can also apply to the language that humans type or speak when communicating with a chatbot. This emerging field of rhetoric scholarship is related to the fields of digital rhetoric and human-computer interaction.
== Description ==
Persuasive text and persuasive digital speech can be examined as AI rhetoric when the text or speech is the product or output of advanced machines that mimic human communication in some way. Historical examples of fictional artificial intelligence capable of speech are portrayed in mythology, folk tales, and science fiction.{{cite book |last1=Dobrin |first1=Sidney I. |title=AI and Writing |date=2023 |publisher=Broadview Press |location=Peterborough, Ontario, Canada |isbn=9781554816514 |page=16 }} Modern computer technology began producing real-world examples of AI rhetoric in the mid-20th century with programs like Joseph Weizenbaum's ELIZA, and chatbot development in the 1990s laid further groundwork for the texts produced by 21st-century generative AI programs.{{cite news |last1=Tarnoff |first1=Ben |title=Weizenbaum's nightmares: how the inventor of the first chatbot turned against AI |url=https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai |access-date=19 November 2024 |work=The Guardian |date=25 July 2023 }}
AI rhetoric may also be understood as the natural language humans use, whether typed or spoken, to prompt and direct AI technologies in persuasive ways, as opposed to directing them through traditional computer code. This sense of the term is closely related to the concepts of prompt engineering and prompt hacking.{{cite thesis |last1=Foley |first1=Christopher |date=2024 |title=Prompt Engineering: Toward a Rhetoric and Poetics for Neural Network Augmented Authorship in Composition and Rhetoric |publisher=University of Central Florida |url=https://stars.library.ucf.edu/etd2023/135/ |degree=Ph.D. |access-date=19 November 2024 }}
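The following minimal sketch illustrates the distinction, assuming the OpenAI Python client and an example model name (both illustrative choices rather than requirements of the concept): the persuasive work is done by the natural-language prompt itself, while the surrounding code merely delivers it to a chatbot.
<syntaxhighlight lang="python">
# Illustrative sketch only: the rhetorical "instruction" given to the AI is
# ordinary natural language, not conventional program logic.
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

# A persuasively framed prompt; rewording this sentence, not rewriting the
# surrounding code, is what changes the chatbot's output.
prompt = (
    "You are a debate coach. In three short paragraphs, persuade a skeptical "
    "city council that public libraries deserve increased funding."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
</syntaxhighlight>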
== History ==
While much of the research related to artificial intelligence was historically conducted by computer scientists, experts across a wide range of subjects (such as cognitive science, philosophy, languages, and cultural studies) have contributed to a more robust understanding of AI for decades.{{cite magazine |last1=Lim |first1=Elvin |last2=Chase |first2=Jonathan |title=Interdisciplinarity is a core part of AI's heritage and is entwined with its future |magazine=Times Higher Education |date=8 November 2023 |url=https://www.timeshighereducation.com/campus/interdisciplinarity-core-part-ais-heritage-and-entwined-its-future |access-date=6 November 2024 }} The advent of 21st-century AI technologies like ChatGPT generated a swell of interest from the arts and humanities, as generative AI technology and chatbots gained prominence and rapid, widespread use in the 2020s.{{cite news |last1=Hu |first1=Krystal |title=ChatGPT sets record for fastest-growing user base |url=https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ |access-date=6 November 2024 |work=Reuters |date=2 February 2023 }}
Questions and theories about the power of machines, computers, and robots to persuasively communicate date back to the very beginnings of computer development, more than a decade before the first computer language programs were created and tested. In 1950, Alan Turing imagined a scenario called the imitation game where a machine using only typewritten communication might be successfully programmed to fool a human reader into believing the machine's responses came from a person.{{cite journal |last1=Turing |first1=A. M. |title=Computing Machinery And Intelligence |url=https://academic.oup.com/mind/article/LIX/236/433/986238 |journal=Mind |date=1 October 1950 |volume=LIX |issue=236 |pages=433–460 |doi=10.1093/mind/LIX.236.433 |access-date=6 November 2024 |url-access=subscription }} By the 1960s, computer programs using basic natural language processing, such as Joseph Weizenbaum's ELIZA, began to pass Turing's test as human research subjects reading the machine's outputs became "very hard to convince that ELIZA is not human."{{cite journal |last1=Weizenbaum |first1=Joseph |title=ELIZA—a computer program for the study of natural language communication between man and machine |journal=Communications of the ACM |date=1 January 1966 |volume=9 |issue=1 |pages=36–45 |doi=10.1145/365153.365168 |url=https://dl.acm.org/doi/10.1145/365153.365168 |access-date=6 November 2024 |issn=0001-0782 }} Future computer language programs would build on Weizenbaum's work, but the first generation of internet chatbots in the 1990s up to the virtual assistants of the 2010s (like Apple's Siri and Amazon's Alexa) received harsh criticism for their less-than-humanlike responses and inability to reason in a helpful manner.{{cite magazine |last1=Bove |first1=Tristan |title='They were all dumb as a rock': Microsoft's CEO slams voice assistants like Alexa and his own company's Cortana as A.I. is poised to take over |magazine=Fortune |date=6 March 2023 |url=https://fortune.com/2023/03/06/microsoft-ceo-satya-nadella-virtual-assistants-dumb-as-a-rock-ai-future/ |access-date=6 November 2024 |language=en }}
By the late 1980s and early 1990s, scholars in the humanities began laying the groundwork for AI rhetoric to become a recognized area of study. Michael L. Johnson's Mind, Language, Machine: Artificial Intelligence in the Poststructuralist Age argued for the "interdisciplinary synthesis" necessary to guide an understanding of the relationship between AI and rhetoric.{{cite book |last1=Johnson |first1=Michael L. |title=Mind, language, machine: artificial intelligence in the poststructuralist age |url={{Google books|RNavCwAAQBAJ |plainurl=yes}} |date=1988 |publisher=St. Martin's Press |location=New York |isbn=9780312004064 |access-date=6 November 2024 }} Lynette Hunter, Professor of the History of Rhetoric and Performance at the University of California, Davis, published "Rhetoric and Artificial Intelligence" in 1991, and was among the first to directly apply the lens of rhetoric to AI.{{cite journal |last1=Hunter |first1=Lynette |title=Rhetoric and Artificial Intelligence |url=https://online.ucpress.edu/rhetorica/article-abstract/9/4/317/83096/Rhetoric-and-Artificial-Intelligence |journal=Rhetorica |date=1 November 1991 |volume=9 |issue=4 |pages=317–340 |doi=10.1525/rh.1991.9.4.317 |access-date=6 November 2024 |url-access=subscription }}
Twenty-first century developments in the scholarship of AI rhetoric are outlined in the July 2024 special issue of Rhetoric Society Quarterly, which is devoted to "Rhetoric of/with AI".{{cite journal |title=Rhetoric of/with AI |journal=Rhetoric Society Quarterly |date=2024 |volume=54 |issue=3 |url=https://www.tandfonline.com/toc/rrsq20/54/3 |access-date=6 November 2024 }} Special issue editors S. Scott Graham and Zoltan P. Majdik summarize the state of the field when they write "rhetorical research related to AI engages all manner of specialty domains [...] Because AI now touches on almost all areas of human activity, rhetorics of AI can help contribute to longstanding discussions in rhetoric of science, rhetoric of health and medicine, cultural rhetorics, public address, writing studies, ideological rhetoric, and many other areas. But studies on the rhetoric of AI can also offer many insights to the broader, interdisciplinary study of AI itself."{{rp|223-4}}
== Media coverage ==
Since ChatGPT's release in 2022, many prominent publications have focused on the uncanny persuasive capabilities of language-based generative AI models like chatbots. New York Times technology columnist Kevin Roose wrote a viral piece in 2023 about how a Microsoft AI named Sydney attempted to convince him to leave his wife,{{cite news |last1=Roose |first1=Kevin |title=A Conversation With Bing's Chatbot Left Me Deeply Unsettled |url=https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html |access-date=6 November 2024 |work=The New York Times |date=16 February 2023 }}{{cite book |last1=Pratschke |first1=B. Mairéad |title=Generative AI and Education: Digital Pedagogies, Teaching Innovation and Learning Design |date=2023 |publisher=Springer |pages=1, 41–42, 56 |isbn=9783031679919 |oclc=1453752201 |url=https://link.springer.com/book/10.1007/978-3-031-67991-9 |quote=When ChatGPT-3.5 was launched in November 2022, it stunned the world of education...It is social, chatty, funny, and helpful but also sometimes unpredictable, lazy, rude, manipulative, and prone to bad behaviour, which ranged from attempting to break down a journalist's marriage (Roose, 2023; Yerushalmy, 2023) ... New York Times tech columnist Kevin Roose also had an exchange with Bing's Sydney (the former code name for what is now Microsoft Copilot), which left him 'deeply disturbed' (Roose, 2023a, 2023b). Roose recounted the conversation in an episode of the Hard Fork podcast he co-hosts, which ended with the bot telling him he loved it and trying to convince him to leave his wife. A year later, Roose wrote a follow-up piece, in which he said that—partly thanks to issues like these—chatbots had been overly tamed by their big tech owners and now lacked the creativity that was necessary to tackle big problems, which he considered a loss (Roose, 2024).}} and he followed up with a 2024 article explaining "a new world of A.I. manipulation" in which users can rely on creative prompt engineering to influence the outputs of generative AI programs.{{cite news |last1=Roose |first1=Kevin |title=How Do You Change a Chatbot's Mind? |url=https://www.nytimes.com/2024/08/30/technology/ai-chatbot-chatgpt-manipulation.html |access-date=6 November 2024 |work=The New York Times |date=30 August 2024 }}{{cite news |last1=Davis |first1=Wes |title=AI search 'shouldn't be this easy to manipulate' |url=https://www.theverge.com/2024/8/31/24232985/ai-search-shouldnt-be-this-easy-to-manipulate |work=The Verge |date=31 August 2024 |access-date=11 December 2024 |quote=Kevin Roose, whose New York Times story about horny Bing chats went viral last year, writes that chatbots are at times very negative about him since, having seemingly picked up on criticism of his piece. Now, he writes about how he used techniques that could be considered an AI-focused version of SEO to influence how they respond when asked about him—and what that portends.}}

A February 2024 study published in the Nature Portfolio journal Scientific Reports claims to "provide the first empirical evidence demonstrating how content generated by artificial intelligence can scale personalized persuasion", even when only limited information about the message recipient is available.{{cite journal |last1=Matz |first1=S. C. |last2=Teeny |first2=J. D. |last3=Vaid |first3=S. S. |last4=Peters |first4=H. |last5=Harari |first5=G. M. |last6=Cerf |first6=M. |title=The potential of generative AI for personalized persuasion at scale |journal=Scientific Reports |date=26 February 2024 |volume=14 |issue=1 |pages=4692 |doi=10.1038/s41598-024-53755-0 |pmid=38409168 |bibcode=2024NatSR..14.4692M |language=en |issn=2045-2322 |pmc=10897294 }} Psychology Today reported on a 2024 study under the attention-grabbing headline "AI Is Becoming More Persuasive Than Humans".{{cite magazine |last1=Mobayed |first1=Tamim |title=AI Is Becoming More Persuasive Than Humans |magazine=Psychology Today |date=25 March 2024 |url=https://www.psychologytoday.com/us/blog/emotional-behavior-behavioral-emotions/202403/ai-is-becoming-more-persuasive-than-humans |access-date=6 November 2024 }}
== AI rhetoric in education ==
As AI's rhetorical capabilities gained attention in the media in the early 2020s, many colleges and universities began offering undergraduate, graduate, and certificate courses in AI prompting and AI rhetoric, with titles like Stanford's "The Rhetoric of Robots and Artificial Intelligence"{{cite web |title=PWR 1SBB: Writing & Rhetoric 1: The Rhetoric of Robots and Artificial Intelligence |url=https://explorecourses.stanford.edu/search?view=catalog&filter-coursestatus-Active=on&page=0&catalog=&q=PWR+1SBB%3A+Writing+%26+Rhetoric+1%3A+The+Rhetoric+of+Robots+and+Artificial+Intelligence&collapse= |website=Stanford Bulletin |publisher=Stanford University |access-date=6 November 2024 }} and the University of Florida's "The Rhetoric of Artificial Intelligence".{{cite web |title=The Rhetoric of Artificial Intelligence |url=https://undergrad.aa.ufl.edu/media/undergradaaufledu/uf-quest/quest-course-materials/quest-1-syllabi/Miller---Fall-2023.pdf |website=University of Florida |access-date=6 November 2024 }} Primary and secondary schools designing and implementing AI literacy curricula also incorporate AI rhetoric concepts into lessons on AI bias and the ethical use of AI.{{cite web |title=K-12 AI curricula: A mapping of government-endorsed AI curricula |url=https://unesdoc.unesco.org/ark:/48223/pf0000380602 |website=UNESDOC Digital Library |publisher=UNESCO |access-date=6 November 2024 |date=2022 }}