Artificial intelligence in politics

{{essay|date=June 2025}}

The increasing adoption and development of artificial intelligence (AI) technologies are having a significant and multifaceted impact on the political sphere.{{Cite web |last=Csernatoni |first=Raluca |date=18 December 2024 |title=Can Democracy Survive the Disruptive Power of AI? |url=https://carnegieendowment.org/research/2024/12/can-democracy-survive-the-disruptive-power-of-ai?lang=en |access-date=1 June 2025 |website=Carnegie Endowment for International Peace}}{{Cite journal |last=Kouroupis |first=Konstantinos |date=2023 |title=AI and politics: ensuring or threatening democracy? |url=https://ideas.repec.org//a/asr/journl/v13y2023i4p575-587.html |journal=Juridical Tribune - Review of Comparative and International Law |language=en |volume=13 |issue=4 |pages=575–587}} AI is viewed both as a fundamental pillar for modernizing political processes and as a potential threat to democratic integrity and stability. Its influence is most visible in three areas: elections, public trust in political institutions, and policymaking.

== History ==

=== Computational foundations (1950s–1970s) ===

An early precursor of artificial intelligence in politics came during a live CBS broadcast on November 4, 1952, when Remington Rand's UNIVAC I computer, after analyzing roughly three million early returns, predicted that Eisenhower would win 438 electoral votes to Stevenson's 93. The final result was 442 to 89, an error of less than one percent.{{Cite news |last=Henn |first=Steve |date=2012-10-31 |title=The Night A Computer Predicted The Next President |url=https://www.npr.org/sections/alltechconsidered/2012/10/31/163951263/the-night-a-computer-predicted-the-next-president |access-date=2025-06-18 |work=NPR |language=en}}

During the 1950s, political science was reshaped by behavioral and quantitative methods, with Ithiel de Sola Pool helping to pioneer social network analysis and developing methodologies that would influence the field for decades. His collaboration with Robert Abelson at Yale produced the first systematic computer simulations of electoral behavior, creating mathematical models that could predict voter responses to different campaign strategies.Pool, Ithiel de Sola; Abelson, Robert (1961). "The Simulmatics Project". The Public Opinion Quarterly. 25 (2): 167–183. doi:10.1086/267012. ISSN 0033-362X. JSTOR 2746702.

In 1959, Ed Greenfield founded the private U.S. data science firm Simulmatics Corporation with Pool as head of research. Simulmatics developed "The People Machine"—an IBM 704 computer system using FORTRAN programming to analyze voter behavior through sophisticated demographic modeling.{{Cite book |last=Lepore |first=Jill |title=If Then: How the Simulmatics Corporation Invented the Future |date=2020 |publisher=Liveright Publishing Corporation |isbn=978-1-63149-611-0 |edition=1st |location=}}

The 1960 presidential election marked the first systematic deployment of computer simulation to influence a major campaign outcome. Simulmatics divided American voters into 480 distinct demographic categories, analyzing archived interviews from 130,000 respondents to predict how different groups would respond to specific messages and policy positions. The computer analysis concluded that Kennedy could win despite anti-Catholic sentiment and that supporting civil rights would ultimately benefit the campaign by mobilizing Black voters.

The 1960s witnessed rapid expansion of academic research in computational politics. Harold Guetzkow published ''Simulation in International Relations'' in 1963, extending computer modeling to foreign policy analysis.Alvarez, R. Michael; Kim, Seo-young Silvia. "[https://www.oxfordbibliographies.com/view/document/obo-9780199756223/obo-9780199756223-0285.xml Computational Social Science]". Oxford Bibliographies in Political Science. 17 June 2025. In 1965, Pool, Abelson, and Samuel Popkin published their seminal work ''Candidates, Issues, and Strategies: A Computer Simulation of the 1960 and 1964 Presidential Elections'', providing the first comprehensive documentation of electoral simulation methodologies.{{Cite book |last=Pool |first=Ithiel de Sola |url=http://archive.org/details/candidatesissues0000pool |title=Candidates, issues and strategies; a computer simulation of the 1960 and 1964 presidential elections |date=1965 |publisher=Cambridge : M.I.T. Press |others=Internet Archive |isbn=978-0-262-66003-7}}

The Pentagon's 1966–1968 contract with Simulmatics to analyze Vietnamese civilian attitudes and develop propaganda strategies provides an early example of the limitations of computational modeling in political contexts. The project failed due to cultural barriers and oversimplified models of human behavior, leading to the company's bankruptcy in 1970.Rohde, Joy (2011). "The last stand of the psychocultural Cold warriors: Military contract research in Vietnam". Journal of the History of the Behavioral Sciences. 47: 232–250.

=== Database-driven campaigns in the Reagan era (1980s–1990s) ===

The 1980s database revolution changed political campaigning by enabling sophisticated voter file management and demographic targeting.{{Cite web |last=Quickbase |title=A Timeline of Database History |url=https://www.quickbase.com/articles/timeline-of-database-history |access-date=2025-06-18 |website=Quickbase |language=en-US}}

Richard Viguerie pioneered computerized direct mail political fundraising, creating extensive conservative donor databases that established the template for data-driven political targeting.Nick Thimmesch (June 9, 1975). "The Grass-Roots Dollar Chase - Ready on the Right". New York Magazine. Archived from the original on January 20, 2023. The Reagan campaigns of 1980 and 1984 utilized early computerized voter file management systems, representing the first systematic use of databases for voter contact and fundraising at scale.{{Cite web |date=2024-05-16 |title=RONALD REAGAN 1980 PRESIDENTIAL CAMPAIGN PAPERS, 1964-1980 {{!}} Ronald Reagan |url=https://www.reaganlibrary.gov/research/finding-aids/ronald-reagan-1980-presidential-campaign-papers-1964-1980-8 |access-date=2025-06-18 |website=www.reaganlibrary.gov |language=en}}

Technological breakthroughs in relational database management systems enabled this transformation. E.F. Codd's 1970 relational database model paper provided the theoretical foundation, while Oracle's first commercial SQL database in 1979 and IBM's DB2 system democratized data processing capabilities.John Foley. "[https://www.oracle.com/database/50-years-relational-database/ Secrets to 50 years of relational database success: innovation, evolution]." Oracle Connect. February 19, 2024. The introduction of the IBM PC in 1981 made database technology accessible to local campaigns, enabling sophisticated voter file management, demographic modeling, and direct mail targeting across different organizational levels.{{Cite web |last=Lavine |first=Scott |date=2018-04-02 |title=The 1980s |url=https://www.computer.org/about/cs-history/1980s/ |access-date=2025-06-18 |website=IEEE Computer Society |language=en-US}}
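
In essence, the voter-file targeting these systems enabled amounted to storing voter records in relational tables and querying them for demographic segments. A minimal illustrative sketch using Python's built-in SQLite module and entirely synthetic records (not any real voter file) shows the idea:

<syntaxhighlight lang="python">
import sqlite3

# Toy, in-memory "voter file" with synthetic records for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE voters (name TEXT, age INTEGER, zip TEXT, party TEXT, voted_1980 INTEGER)"
)
conn.executemany(
    "INSERT INTO voters VALUES (?, ?, ?, ?, ?)",
    [
        ("A. Smith", 62, "90210", "R", 1),
        ("B. Jones", 34, "90210", "I", 0),
        ("C. Lee", 45, "10001", "D", 1),
        ("D. Patel", 71, "90210", "R", 1),
    ],
)

# A direct-mail style segment: older, high-turnout voters in one target ZIP code.
segment = conn.execute(
    "SELECT name FROM voters WHERE age >= 55 AND voted_1980 = 1 AND zip = '90210'"
).fetchall()
print([row[0] for row in segment])  # ['A. Smith', 'D. Patel']
</syntaxhighlight>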

Political applications expanded rapidly during this period. Campaigns developed complex voter classification systems that could automatically categorize likely supporters and predict issue preferences based on demographic data. Early microtargeting emerged in California in 1992, using nearest neighbor algorithms and decision trees to personalize political messaging.Chad Vander Veen, "Zeroing In," www.govtech.net, Jan 2, 2006 Archived 2006-10-14 at the Wayback Machine. This represented a crucial evolution from broadcast messaging to targeted communication strategies.
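
A minimal sketch of the kind of decision-tree voter classification described above, using entirely synthetic demographic data; the features, labels, and depth limit are illustrative assumptions, not any campaign's actual model:

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier

# Synthetic demographic features: [age, income_bracket, urban (1/0)]
X = [[25, 1, 1], [62, 3, 0], [41, 2, 1], [70, 2, 0], [33, 1, 1], [55, 3, 0]]
# Synthetic labels: 1 = responded to the candidate's message, 0 = did not
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Score new households so the campaign can prioritize whom to contact
prospects = [[29, 1, 1], [68, 3, 0]]
print(tree.predict(prospects))  # predicted responders among the prospects
</syntaxhighlight>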

The 1990s witnessed the emergence of professional political data firms offering computerized voter file management, demographic targeting, and direct mail services. During this period, the Republican party developed the "Voter Vault" system (now the GOP Data Center).{{Cite web |title=GOP Datacenter (fka Voter Vault) |url=https://www.filpac.com/votervault.htm |access-date=2025-06-18 |website=www.filpac.com}}{{Cite web |title=Parties use huge databases to get personal with voters {{!}} The Seattle Times |url=https://archive.seattletimes.com/archive/20030615/demzilla15/parties-use-huge-databases-to-get-personal-with-voters |access-date=2025-06-18 |website=archive.seattletimes.com}}

=== Internet-era digital campaigns (2000s–present) ===

The transition to internet-based political engagement began in 1996 when Bill Clinton and Bob Dole launched the first presidential campaigns to utilize online platforms, though early internet campaigns had limited impact due to technological constraints and unfamiliarity with effective digital strategies.{{Cite news |last=Shields |first=Mike |date=2016-02-18 |title=An Oral History of The First Presidential Campaign Websites in 1996 |url=http://www.wsj.com/articles/an-oral-history-of-the-first-presidential-campaign-websites-in-1996-1455831487 |access-date=2025-06-18 |work=Wall Street Journal |language=en-US |issn=0099-9660}}

Howard Dean's 2004 campaign revolutionized political organizing by pioneering internet-enabled grassroots mobilization, utilizing meetups and blog-based campaigning to build unprecedented online communities.{{Cite web |date=2005-04-06 |title=The Dean Activists: Their Profile and Prospects |url=https://www.pewresearch.org/politics/2005/04/06/the-dean-activists-their-profile-and-prospects/ |access-date=2025-06-18 |website=Pew Research Center |language=en-US}}

Though Dean failed to win the Democratic nomination, his digital strategies became the foundation for future campaigns. The campaign demonstrated that internet connectivity could transform political participation from passive consumption to active engagement.{{Cite web |title=The father of all Web campaigns |url=https://www.politico.com/story/2012/09/how-deans-wh-bid-gave-birth-to-web-campaigning-081834 |access-date=2025-06-18 |website=POLITICO}}

Natural language processing capabilities evolved significantly during the internet era. Statistical NLP methods and n-gram analysis enabled automated analysis of political texts, while topic modeling allowed systematic examination of political manifestos and speeches.{{Cite web |title=Natural Language Processing Examples in Government Data |url=https://www.deloitte.com/us/en/insights/topics/emerging-technologies/natural-language-processing-examples-in-government-data.html |access-date=2025-06-18 |website=Deloitte Insights |language=en-us}} These developments laid the groundwork for real-time sentiment analysis and automated content generation that would become central to modern campaigns.
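
A minimal sketch of the bag-of-words and topic-modeling techniques mentioned above, applied to a tiny synthetic corpus; the texts and parameters are invented purely for illustration:

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny synthetic corpus standing in for manifesto or speech excerpts
docs = [
    "we will cut taxes and support small business growth",
    "invest in schools teachers and public education funding",
    "lower taxes mean jobs business and economic growth",
    "education funding for schools and student support",
]

# Bag-of-words counts, including bigrams (a simple n-gram representation)
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a two-topic LDA model and print the top terms per topic
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")
</syntaxhighlight>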

==== Social media campaigns (2008–2018) ====

Social media platforms emerged as crucial political communication tools during this period. MySpace, Facebook, and YouTube became primary venues for political engagement, enabling direct candidate-to-voter communication and peer-to-peer political influence.

Barack Obama's 2008 victory represented the first successful integration of online and offline political data. The campaign employed Chris Hughes, a co-founder of Facebook, to develop social media strategies that engaged American adults as online political participants on a scale not seen in any previous election.{{Cite web |title=How Chris Hughes Founded Facebook And Then Got Obama Elected By The Age Of 26 |url=https://www.businessinsider.com/henry-blodget-how-chris-hughes-founded-facebook-and-then-got-barack-obama-elected-2009-5 |access-date=2025-06-18 |website=Business Insider |language=en-US}}Stirland, Sarah Lai (June 2, 2015). "The Obama Campaign: A Great Campaign, Or The Greatest? | WIRED". Wired. Archived from the original on June 2, 2015.{{Cite web |last=staff |first=Pew Research Center: Journalism & Media |date=2012-08-15 |title=How the Presidential Candidates Use the Web and Social Media |url=https://www.pewresearch.org/journalism/2012/08/15/how-presidential-candidates-use-web-and-social-media/ |access-date=2025-06-18 |website=Pew Research Center |language=en-US}} The campaign also established voter scoring systems based on predictive analytics, laying the foundation for modern political data operations.

Obama's 2012 campaign, through its "Cave" data operation, was the first to fully operationalize machine learning in political targeting. Led by Chief Analytics Officer Dan Wagner, the campaign created a unified database merging voter files, consumer data, and social media information to develop "persuadability scores" predicting individual voter susceptibility to specific messages. The operation employed A/B testing for message optimization and used predictive modeling to identify optimal celebrity endorsements, raising over $1 billion through data-driven fundraising.{{Cite web |title=How Obama's Team Used Big Data to Rally Voters |url=https://www.technologyreview.com/2012/12/19/114510/how-obamas-team-used-big-data-to-rally-voters/ |access-date=2025-06-18 |website=MIT Technology Review |language=en}}{{Cite web |last=Beckett |first=Lois |date=2012-11-29 |title=Everything We Know (So Far) About Obama's Big Data Tactics |url=https://www.propublica.org/article/everything-we-know-so-far-about-obamas-big-data-operation |access-date=2025-06-18 |website=ProPublica}} This campaign marked an evolution from static demographic analysis to dynamic behavioral prediction, demonstrating big data analytics' potential in political contexts and establishing new standards for campaign sophistication.
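
Persuadability scoring of this kind is commonly described today as uplift modeling: comparing a voter's predicted support with and without campaign contact. A minimal illustrative sketch on synthetic data follows; it is an assumption-laden simplification, not the campaign's actual methodology:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic voter features and a randomized "contacted" flag (the A/B split)
X = rng.normal(size=(1000, 4))
contacted = rng.integers(0, 2, size=1000)
# Synthetic outcome: baseline support plus an extra lift for some contacted voters
support = ((X[:, 0] + 0.8 * contacted * (X[:, 1] > 0)
            + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

# Two-model uplift: separate support models for contacted and uncontacted voters
m_treat = LogisticRegression().fit(X[contacted == 1], support[contacted == 1])
m_ctrl = LogisticRegression().fit(X[contacted == 0], support[contacted == 0])

# "Persuadability score" = predicted support if contacted minus if not contacted
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
print(np.argsort(uplift)[::-1][:10])  # indices of the ten most persuadable voters
</syntaxhighlight>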

Cambridge Analytica's emergence in 2013 with $15 million backing from Robert Mercer and strategic guidance from Steve Bannon marked a new phase in political AI development. The company developed Facebook data harvesting capabilities through Aleksandr Kogan's "thisisyourdigitallife" app, ultimately accessing data from 87 million users. This enabled psychographic profiling that could predict and influence political behavior based on personality traits rather than traditional demographic categories.{{Cite news |last=Ingram |first=David |date=2018-03-20 |title=Factbox: Who is Cambridge Analytica and what did it do? |url=https://www.reuters.com/article/technology/factbox-who-is-cambridge-analytica-and-what-did-it-do-idUSKBN1GW07F/ |access-date=2025-06-18 |work=Reuters |language=en-US}}Meredith, Sam (April 10, 2018). "Facebook-Cambridge Analytica: A timeline of the data hijacking scandal". CNBC. Archived from the original on October 19, 2018.Kosinski, M; Stillwell, D; Graepel, T (2013). "Private traits and attributes are predictable from digital records of human behavior". Proceedings of the National Academy of Sciences. 110 (15): 5802–5805. Bibcode:2013PNAS..110.5802K. doi:10.1073/pnas.1218772110. PMC 3625324. PMID 23479631.
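
The underlying technique reported by Kosinski and colleagues was to fit statistical models that predict personal traits from patterns of digital footprints such as Facebook likes. A minimal, purely illustrative sketch with synthetic data (not the study's actual data, features, or model) follows:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic binary "like" matrix: rows are users, columns are pages or items
likes = rng.integers(0, 2, size=(500, 50))
# Synthetic trait label loosely correlated with a handful of the items
trait = (likes[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=500) > 2.5).astype(int)

# Train on most users, then check how well the trait is predicted for the rest
model = LogisticRegression(max_iter=1000).fit(likes[:400], trait[:400])
print("held-out accuracy:", model.score(likes[400:], trait[400:]))
</syntaxhighlight>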

The 2016 Trump campaign deployed sophisticated behavioral analytics through Cambridge Analytica's psychographic targeting system, categorizing voters into eight distinct groups including a "Deterrence" category designed to suppress turnout among likely Clinton supporters.{{Cite web |last=Team |first=Channel 4 News Investigations |date=2020-09-28 |title=Revealed: Trump campaign strategy to deter millions of Black Americans from voting in 2016 |url=https://www.channel4.com/news/revealed-trump-campaign-strategy-to-deter-millions-of-black-americans-from-voting-in-2016 |access-date=2025-06-18 |website=Channel 4 News |language=en-GB}} The campaign integrated social media manipulation with automated content generation, demonstrating AI's potential for political influence at unprecedented scale and precision.

The Cambridge Analytica scandal that broke in March 2018 exposed the extent of AI-powered political manipulation and data harvesting, triggering global conversations about digital privacy and democratic integrity. The revelations demonstrated how advanced AI techniques could be weaponized for political purposes, leading to increased regulatory scrutiny and public awareness of AI's political implications.Chan, Rosalie. "[https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10 The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections]". Business Insider. Archived from the original on January 29, 2021.{{Cite journal |last=Hu |first=Margaret |date=July 2020 |title=Cambridge Analytica's black box |url=https://journals.sagepub.com/doi/10.1177/2053951720938091 |journal=Big Data & Society |language=en |volume=7 |issue=2 |doi=10.1177/2053951720938091 |issn=2053-9517}}

==== Generative AI campaigns (2024–present) ====

The emergence of generative AI systems like ChatGPT in 2022 accelerated both capabilities and concerns about AI's political impact.{{Cite web |title=CANDIDATE AI: THE IMPACT OF ARTIFICIAL INTELLIGENCE ON ELECTIONS |url=https://news.emory.edu/features/2024/09/emag_ai_elections_25-09-2024/index.html |access-date=2025-06-18 |website=news.emory.edu |language=en}}European Parliament Think Tank. "[https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2023)751478 Artificial intelligence, democracy and elections.]" (Briefing). www.europarl.europa.eu 19-09-2023{{Cite web |last=Hu |first=Charlotte |title=How AI Bots Could Sabotage 2024 Elections around the World |url=https://www.scientificamerican.com/article/how-ai-bots-could-sabotage-2024-elections-around-the-world/ |access-date=2025-06-18 |website=Scientific American |language=en}} During 2024, election cycles around the world saw widespread use of AI for campaign content creation, voter targeting, and real-time sentiment analysis.{{Cite web |date=2024-05-29 |title=AI and the 2024 Elections |url=https://ash.harvard.edu/articles/ai-and-the-2024-elections/ |access-date=2025-06-18 |website=Ash Center |language=en-US}}{{Cite web |date=2025-05-06 |title=How technology is reshaping political campaigns |url=https://erc.europa.eu/projects-statistics/science-stories/how-technology-reshaping-political-campaigns |access-date=2025-06-18 |website=ERC |language=en}}{{Cite web |date=2024-12-05 |title=The Role of AI in the 2024 Elections |url=https://ash.harvard.edu/resources/the-role-of-ai-in-the-2024-elections/ |access-date=2025-06-18 |website=Ash Center |language=en-US}} Twenty major tech companies pledged to combat AI misuse in elections,{{Cite web |title=How Artificial Intelligence Influences Elections and What We Can Do About It |url=https://campaignlegal.org/update/how-artificial-intelligence-influences-elections-and-what-we-can-do-about-it |access-date=2025-06-18 |website=Campaign Legal Center |language=en}} reflecting industry recognition of the technology's potential for democratic harm.{{Cite web |title=The impact of generative AI in a global election year |url=https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year/ |access-date=2025-06-18 |website=Brookings |language=en-US}}{{Cite news |last=Bond |first=Shannon |date=2024-12-21 |title=How AI deepfakes polluted elections in 2024 |url=https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections |access-date=2025-06-18 |work=NPR |language=en}}

== Potential benefits of AI in politics ==

Artificial intelligence is increasingly utilized in the political sphere, and some observers argue that it offers various potential benefits to democratic processes. AI tools can facilitate improved communication between citizens and public administration. Proponents argue that these technologies present an opportunity to enhance the democratic process, enabling citizens to gain a better understanding of political issues and to participate more easily in democratic discourse. Politicians have used AI to promote strategies and foster closer communication with citizens, potentially increasing democratic participation and educating the public on policy matters. For example, the Danish Synthetic Party is led by an AI responsible for its political program, and Denmark's Prime Minister Mette Frederiksen used ChatGPT in a parliamentary speech to highlight AI's potential. Supporters contend that AI applications such as chatbots and machine learning tools can foster more direct and persuasive contact with people, educate citizens on democratic principles and policy matters, and motivate them to express their opinions to governments and politicians. The integration of AI can also make political campaigns more efficient and cost-effective, allowing for quick execution and the ability to capture citizen queries and predict their needs for more targeted engagement.

== Challenges and dangers of AI in politics ==

=== AI in elections ===

{{See also|Artificial intelligence and elections}}

Artificial intelligence is increasingly impacting elections globally, with growing concerns that powerful generative AI systems and deepfakes will destabilize democracies. These technologies make it easy for anyone with a smartphone and an imagination to create fake, yet convincing, content aimed at fooling voters.Swenson, Ali; Chan, Kelvin (March 14, 2024). "[https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd Election disinformation takes a big leap with AI being used to deceive worldwide]". AP. Archived from the original on March 18, 2024. Retrieved November 18, 2024. AI deepfakes tied to elections in Europe and Asia spread through social media in 2023 and 2024, serving as a warning for future elections in other nations. Examples include AI-generated audio recordings of Slovakia's liberal party leader discussing vote rigging and raising beer prices, a video of Moldova's pro-Western president throwing her support behind a Russian-friendly party, and a robocall impersonating U.S. President Joe Biden urging voters to abstain from a primary election. As the public becomes more aware that video and audio can be convincingly faked, some may exploit this by denouncing authentic media as deepfakes.

The deployment of AI in the political arena is considered high-risk because of its potential for harm. AI tools, when deployed on social media, can generate misleading content at a speed and scale that outpaces governmental oversight and society's ability to manage the consequences. Some nations, including Russia, Iran, and China, have leveraged AI in their influence operations to tailor polarizing content and spread synthetic media.{{Cite web |last=Bond |first=Shannon |date=September 23, 2024 |title=U.S. officials say Russia is embracing AI for its election influence efforts |url=https://www.npr.org/2024/09/23/nx-s1-5123927/russia-artificial-intelligence-election |website=NPR}} Authorities worldwide are trying to establish guardrails: the U.S. has banned AI-generated voices in robocalls, major tech companies have signed a pact to prevent AI from disrupting elections, and the EU's AI Act imposes obligations for transparency, detection, and tracing of AI-generated material. Many U.S. states have introduced legislation requiring disclosure of AI use in election content. However, enforcing such regulations is a significant hurdle, given that deepfakes are difficult to detect and trace to their source and the technology is advancing rapidly. A comprehensive, multifaceted approach combining regulatory tools, technical solutions such as watermarking and detection software, and public digital literacy initiatives is considered crucial to safeguarding democratic elections.

=== AI influence and public trust in politics ===

Artificial intelligence profoundly affects public trust in politics by introducing significant risks, and its use in politics raises serious ethical and legal concerns. AI tools can process massive amounts of data to analyze user trends and behaviors, enabling highly targeted and persuasive campaign messages that can manipulate public opinion and undermine the direct, unmediated character of political communication. This can lead to widespread deception and damage public trust in democratic institutions, as seen with AI-generated attack videos in political campaigns. The lack of a uniform and binding regulatory framework for AI further exacerbates concerns about privacy and security, and raises questions about accountability for false or biased outcomes produced by AI systems.

Furthermore, AI systems are not neutral; they are embedded in social, political, cultural, and economic structures and are often designed to benefit existing dominant interests, amplifying hierarchies and encoding narrow classifications.{{Cite book |last=Crawford |first=Kate |title=[[Atlas of AI]] |date=2021 |publisher=Yale University Press}} This means that AI systems can reproduce and intensify existing structural inequalities, particularly when used in sensitive areas such as the justice system or welfare distribution. AI development often obscures its material and human costs, including energy consumption, labor exploitation, and mass data harvesting, further distancing the public from an understanding of its true impact. Despite the proliferation of AI ethics frameworks, many lack representation from the communities most affected, are often unenforceable, and may prioritize profit over ethical concerns, leading to a persistent asymmetry of power in which technical systems extend inequality. This dynamic makes it challenging to build trust, as the public struggles to discern truth from AI-generated misinformation and to hold those responsible for AI's negative consequences accountable.

== Policy regulations and AI ==

{{See also|Regulation of artificial intelligence}}

To address the challenges posed by generative AI to democratic processes, many countries have taken a multifaceted approach. Many US states have created policies specifically targeting AI use in elections; the National Conference of State Legislatures has compiled a list of such legislation by state as of 2024, some of it carrying both criminal and civil penalties.[https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns "National Conference of State Legislatures".] Artificial Intelligence (AI) in Elections and Campaigns. October 7, 2024. Retrieved October 21, 2024. Critics of AI argue that regulatory and governance tools targeting deepfakes, AI-generated disinformation, and foreign interference are imperative. Some argue that relying on self-regulation by tech companies is insufficient and that governments must enact robust policies to mitigate the creation and proliferation of synthetic content and hold corporations legally and financially accountable. Policymakers are considering AI content watermarking, though it faces technical challenges, and without robust legislation companies are unlikely to prioritize such tools. Broader, harmonized standards across jurisdictions may be necessary for effective multilateral governance. The G7 has called on companies to develop reliable content authentication mechanisms such as watermarking, and the EU's AI Act imposes obligations for transparency, detection, and tracing of AI-generated material. Other interventions, such as legislation targeting election-specific deepfakes, technological solutions, and voter education initiatives, remain under discussion.
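
One family of text-watermarking proposals embeds a statistical signal at generation time (a pseudorandom "green list" of tokens the model is biased toward) that can later be detected with a significance test. A minimal, purely illustrative sketch of the detection idea follows; the hashing scheme, word-level granularity, and scoring here are simplifying assumptions, not any deployed system's specification:

<syntaxhighlight lang="python">
import hashlib
from math import sqrt

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandomly assign each word to a "green" or "red" list, seeded by the
    # preceding word, mimicking how a generator could bias sampling toward green words.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction_z(text: str) -> float:
    words = text.lower().split()
    greens = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    n = len(words) - 1
    # Without a watermark, about half the words land on the green list; a large
    # positive z-score suggests the text was generated with a green-list bias.
    return (greens - 0.5 * n) / sqrt(0.25 * n)

print(green_fraction_z("the committee will vote on the new budget proposal next week"))
</syntaxhighlight>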

Lawmakers across states have introduced legislation to combat election-related AI-generated disinformation, often requiring disclosure of AI use for election-related content within specific time frames before elections.{{Cite web |last=Mirza |first=Rehan |date=16 February 2024 |title=How AI deepfakes threaten the 2024 elections |url=https://journalistsresource.org/home/how-ai-deepfakes-threaten-the-2024-elections/ |access-date=1 June 2025 |website=The Journalist's Resource}} However, the introduction of these bills does not guarantee they will become law, and their enforceability could be challenged on free-speech grounds. Penalties might only be imposed after the fact or be evaded by foreign entities.

Some social media companies have attempted to limit the spread of false content. Their primary response is often to label content as "AI-generated". This puts the onus on users to recognize labels that are not yet fully rolled out, and AI-generated content may still evade detection. Labeling policies often do not specify whether a piece of content is harmful, only that it was generated by AI.

Other strategies include developing and enforcing responsible platform design and moderation, legal mandates, and calls for journalists and the public to hold platforms accountable. There is not yet a uniform and binding international regulatory framework governing AI. The European Commission proposed an AI Regulation, since adopted as the EU's AI Act, setting out how AI systems can be introduced and used in the EU, designating AI systems used in democratic processes as high-risk and imposing mandatory requirements.{{Cite web |last=European Union |date=6 August 2023 |title=EU AI Act: first regulation on artificial intelligence |url=https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence}}

== References ==