
{{Short description|Propaganda method based on digital technologies}}

'''Computational propaganda''' is the use of computational tools (algorithms and automation) to distribute misleading information over social media networks. Advances in digital technologies and social media have enhanced methods of propaganda.<ref>[https://navigator.oii.ox.ac.uk/what-is-comprop/ What is computational propaganda?]</ref> It is characterized by automation, scalability, and anonymity.

Autonomous agents (internet bots) can analyze big data collected from social media and the Internet of things to manipulate public opinion in a targeted way, and even to mimic real people on social media.<ref>Samuel C. Woolley and Philip N. Howard, "Political Communication, Computational Propaganda, and Autonomous Agents", International Journal of Communication 10 (2016), 4882–4890</ref> Coordination is an important component that bots help achieve, giving campaigns amplified reach. Digital technology enhances well-established traditional methods of manipulating public opinion: appeals to people's emotions and biases circumvent rational thinking and promote specific ideas.<ref>"Computational propaganda: Concepts, methods, and challenges" (an interview with Philip Howard), Communication and the Public, Volume 8, Issue 2, 2023 {{doi|10.1177/2057047323118}}</ref>

Pioneering work<ref>[https://results2021.ref.ac.uk/impact/5c0a6d23-6451-45ba-9f90-32503f09f824?page=1 Addressing the Harms of Computational Propaganda on Democracy]</ref> in identifying and analyzing the concept was done by Philip N. Howard's team at the Oxford Internet Institute, which has been investigating computational propaganda since 2012,<ref>[https://www.oii.ox.ac.uk/research/projects/computational-propaganda/ Computational Propaganda / Overview], OII</ref> building on Howard's earlier research into the effects of social media on the general public, published, for example, in his 2005 book ''New Media Campaigns and the Managed Citizen'' and earlier articles. In 2017, the team published a series of articles detailing computational propaganda's presence in several countries.<ref>{{Cite web |title=DemTech {{!}} Computational Propaganda Worldwide: Executive Summary |url=https://demtech.oii.ox.ac.uk/research/posts/computational-propaganda-worldwide-executive-summary/ |access-date=2025-03-26 |website=demtech.oii.ox.ac.uk |language=en-GB}}</ref>

Proposed regulatory responses combine multiple approaches. Detection is another front for mitigation; detection methods often rely on machine learning models, with early techniques suffering from a lack of datasets and failing against gradually improving fake accounts. Newer techniques address these shortcomings with other machine learning approaches or specialized algorithms, but challenges remain, such as increasingly believable machine-generated text and its automation.

== Mechanisms ==

Computational propaganda is the strategic posting of misleading information on social media by partially automated fake accounts in order to manipulate readers.<ref>{{Cite journal |last=Nerino |first=Valentina |date=2021-04-22 |title=Tricked into Supporting: A Study on Computational Propaganda Persuasion Strategies |url=https://italiansociologicalreview.com/ojs/index.php/ISR/article/view/438 |journal=Italian Sociological Review |language=en |volume=11 |issue=4S |pages=343 |doi=10.13136/isr.v11i4S.438 |issn=2239-8589}}</ref>

=== Bots and coordination ===

In social media, bots are accounts pretending to be human.<ref>{{Cite journal |last=Apuke |first=O.D. |date=2018 |title=The Role of Social Media and Computational Propaganda in Political Campaign Communication |url=https://sites.google.com/upm.edu.my/jlc-fbmk/regular-issues/vol-5-no-2-september-2018/jlc-07-sept2018?authuser=0 |journal=Journal of Language and Communication |volume=5 |issue=2 |pages=225–250}}</ref><ref>{{Cite web |last=Neudert |first=Lisa-Maria |date=2017 |title=Computational Propaganda in Germany: A Cautionary Tale. |url=https://demtech.oii.ox.ac.uk/research/posts/computational-propaganda-in-germany-a-cautionary-tale/ |website=Programme on Democracy & Technology}}</ref> They are managed to a degree via programs, and are used to spread information that creates mistaken impressions.<ref>{{Cite journal |last=O'Hara |first=Ian |date=2022-07-01 |title=Automated Epistemology: Bots, Computational Propaganda & Information Literacy Instruction |url=https://linkinghub.elsevier.com/retrieve/pii/S0099133322000568 |journal=The Journal of Academic Librarianship |volume=48 |issue=4 |pages=102540 |doi=10.1016/j.acalib.2022.102540 |issn=0099-1333|url-access=subscription }}</ref> On social media they may be referred to as "social bots", and they may be helped by popular users who amplify them and lend them apparent reliability by sharing their content. Bots allow propagandists to keep their identities secret. One study from Oxford's Computational Propaganda Research Project found that bots achieved effective placement on Twitter during a political event.<ref>{{Cite web |last=Woolley, S.C, & Guilbeault, D.R. |date=2017 |title=Computational Propaganda in the United States of America: Manufacturing Consensus Online. |url=https://demtech.oii.ox.ac.uk/research/posts/computational-propaganda-in-the-united-states-of-america-manufacturing-consensus-online/ }}</ref>

Bots can be coordinated,<ref>{{Cite journal |last1=Sela |first1=Alon |last2=Neter |first2=Omer |last3=Lohr |first3=Václav |last4=Cihelka |first4=Petr |last5=Wang |first5=Fan |last6=Zwilling |first6=Moti |last7=Sabou |first7=John Phillip |last8=Ulman |first8=Miloš |date=2025-01-30 |title=Signals of propaganda—Detecting and estimating political influences in information spread in social networks |journal=PLOS ONE |language=en |volume=20 |issue=1 |pages=e0309688 |doi=10.1371/journal.pone.0309688 |doi-access=free |issn=1932-6203 |pmc=11781619 |pmid=39883667|bibcode=2025PLoSO..2009688S }}</ref> which may be leveraged to exploit platform algorithms. Propagandists mix real and fake users; their efforts draw on a variety of actors, including botnets, paid online users, astroturfers, seminar users, and troll armies.<ref>{{Cite book |last1=Almotairy |first1=Bodor Moheel |last2=Abdullah |first2=Manal |last3=Alahmadi |first3=Dimah |chapter=Detection of Computational Propaganda on Social Networks: A Survey |series=Lecture Notes in Networks and Systems |date=2023 |volume=739 |editor-last=Arai |editor-first=Kohei |title=Intelligent Computing |chapter-url=https://link.springer.com/chapter/10.1007/978-3-031-37963-5_18 |language=en |location=Cham |publisher=Springer Nature Switzerland |pages=244–263 |doi=10.1007/978-3-031-37963-5_18 |isbn=978-3-031-37963-5}}</ref><ref>{{Cite book |last1=Martino |first1=Giovanni Da San |last2=Cresci |first2=Stefano |last3=Barrón-Cedeño |first3=Alberto |last4=Yu |first4=Seunghak |last5=Pietro |first5=Roberto Di |last6=Nakov |first6=Preslav |title=Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence |date=2020-07-09 |chapter=A Survey on Computational Propaganda Detection |chapter-url=https://www.ijcai.org/proceedings/2020/672 |language=en |volume=5 |pages=4826–4832 |doi=10.24963/ijcai.2020/672|isbn=978-0-9992411-6-5 }}</ref> Bots can create a false sense of prevalence.<ref>{{Cite journal |last=Olanipekun |first=Samson Olufemi |date=2025 |title=Computational propaganda and misinformation: AI technologies as tools of media manipulation |url=https://journalwjarr.com/node/366 |journal=World Journal of Advanced Research and Reviews |language=en |volume=25 |issue=1 |pages=911–923 |doi=10.30574/wjarr.2025.25.1.0131 |issn=2581-9615|url-access=subscription }}</ref> Bots can also engage in spam and harassment. They are becoming progressively more sophisticated, partly owing to improvements in AI, a development that complicates detection for humans and automated methods alike.
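As a minimal illustration of what coordinated amplification can look like in data, the sketch below flags pairs of accounts that repeatedly share the same URL within a short time window. The script, account names, and threshold are invented for the example and are not taken from the cited studies.

<syntaxhighlight lang="python">
from collections import defaultdict
from itertools import combinations

# Hypothetical input: (account_id, url, unix_timestamp) triples.
posts = [
    ("acct_a", "http://example.org/story", 1000),
    ("acct_b", "http://example.org/story", 1030),
    ("acct_c", "http://example.org/story", 5000),
    ("acct_a", "http://example.org/other", 2000),
    ("acct_b", "http://example.org/other", 2010),
]

WINDOW = 60  # seconds; tighter windows suggest tighter coordination

def coordination_counts(posts, window=WINDOW):
    """Count how often each pair of accounts posts the same URL within `window`."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))
    pair_counts = defaultdict(int)
    for shares in by_url.values():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return pair_counts

print(dict(coordination_counts(posts)))
# {('acct_a', 'acct_b'): 2} -- this pair co-posted two URLs within a minute
</syntaxhighlight>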

=== Problematic information ===

The problematic content tactics propagandists employ include disinformation, misinformation, and information shared regardless of veracity. The spread of fake and misleading information seeks to influence public opinion.<ref>{{Cite journal |last=Haile |first=Yirgalem A |date=2024-12-22 |title=The theoretical wedding of computational propaganda and information operations: Unraveling digital manipulation in conflict zones |url=https://journals.sagepub.com/doi/10.1177/14614448241302319 |journal=New Media & Society |language=EN |pages=14614448241302319 |doi=10.1177/14614448241302319 |issn=1461-4448|url-access=subscription }}</ref> Deepfakes and generative language models are also employed to create convincing content. The proportion of misleading information is expected to grow, complicating detection.

=== Algorithmic influence ===

Algorithms are another important element of computational propaganda. Algorithmic curation may influence beliefs through repetition. Algorithms boost and hide content, which propagandists use to their advantage. Social media algorithms prioritize user engagement, and to that end their filtering favors controversy and sensationalism. The algorithmic selection of what is presented can create echo chambers and exert influence.<ref>{{Cite journal |last=Gombar |first=Marija |date=2025-03-03 |title=Algorithmic Manipulation and Information Science: Media Theories and Cognitive Warfare in Strategic Communication |url=https://ej-media.org/index.php/media/article/view/41 |journal=European Journal of Communication and Media Studies |language=en |volume=4 |issue=2 |pages=1–11 |doi=10.24018/ejmedia.2025.4.2.41 |issn=2976-7431|doi-access=free }}</ref>
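As an illustration of engagement-first curation, the toy ranking function below scores posts by weighted engagement with a freshness decay; the weights and decay exponent are hypothetical and are not any platform's actual formula.

<syntaxhighlight lang="python">
def engagement_score(post, now):
    """Score a post by weighted engagement, decayed by age (illustrative only)."""
    raw = (1.0 * post["likes"]
           + 2.0 * post["shares"]      # shares spread content the furthest
           + 1.5 * post["comments"])   # heated argument counts as engagement too
    age_hours = (now - post["created_at"]) / 3600
    return raw / (age_hours + 2) ** 1.5  # newer posts decay less

posts = [
    {"id": 1, "likes": 120, "shares": 5, "comments": 10, "created_at": 0},
    {"id": 2, "likes": 40, "shares": 60, "comments": 90, "created_at": 7200},
]
ranked = sorted(posts, key=lambda p: engagement_score(p, now=10800), reverse=True)
print([p["id"] for p in ranked])  # [2, 1]: the heavily shared, argued-over post wins
</syntaxhighlight>

Under such a scoring rule, content that provokes shares and comments, however acquired, outranks calmer material, which is the dynamic propagandists exploit.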

One study posits that TikTok's automated features (e.g. the sound page) and interactive features (e.g. stitching, duetting, and the content-imitation trend) can also boost misleading information.<ref>{{Cite journal |last1=Bösch |first1=Marcus |last2=Divon |first2=Tom |date=2024-09-01 |title=The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine |url=https://journals.sagepub.com/doi/10.1177/14614448241251804 |journal=New Media & Society |language=EN |volume=26 |issue=9 |pages=5081–5106 |doi=10.1177/14614448241251804 |issn=1461-4448|url-access=subscription }}</ref> Furthermore, anonymity is preserved by deleting the audio's origin.

== Multidisciplinary studies ==

A multidisciplinary approach has been proposed for combating misinformation, including the use of psychology to understand its effectiveness. Some studies have examined misleading information through the lens of cognitive processes, seeking insight into how humans come to accept it.

Media theories can help in understanding the complex relationships among computational propaganda, the actors surrounding it, and its effects, and can guide regulation efforts. Agenda-setting theory and framing theory have also been applied to computational propaganda phenomena, and studies have found both effects present. Algorithmic amplification is an instance of the former, which holds that the media's selection and occlusion of topics influences the public's attention, and that repetition focuses that attention.

Repetition is a key characteristic of computational propaganda; on social media it can modify beliefs. One study posits that repetition keeps topics fresh in the mind, with a similar effect on perceived significance. The illusory truth effect, whereby people come to believe what is repeated to them over time, has also been suggested as a mechanism by which computational propaganda operates.<ref>{{Cite journal |last=Murphy, J., Keane, A., & Power, A. |date=2020-06-26 |title=Computational Propaganda: Targeted Advertising and the Perception of Truth |url=https://www.academic-conferences.org/wp-content/uploads/dlm_uploads/2020/07/ECCWS-abstract-booklet-FINAL.pdf#page=96 |journal=European Conference on Cyber Warfare and Security |publisher=Curran Associates Inc. |volume=2020-June |pages=491–500 |doi=10.34190/EWS.20.503 |doi-broken-date=4 May 2025 |isbn=9781912764617}}</ref>

Other phenomena have been proposed to be at play in computational propaganda tools. One study posits the presence of the megaphone effect, the bandwagon effect, and cascades. Other studies point to the use of content that evokes emotions.<ref>{{Cite journal |last1=Kozyreva |first1=Anastasia |last2=Lewandowsky |first2=Stephan |last3=Hertwig |first3=Ralph |date=2020-12-01 |title=Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools |journal=Psychological Science in the Public Interest |language=EN |volume=21 |issue=3 |pages=103–156 |doi=10.1177/1529100620946707 |issn=1529-1006 |pmc=7745618 |pmid=33325331}}</ref> Another tactic is suggesting a connection between topics by placing them in the same sentence. Trust bias, validation by intuition rather than evidence, truth bias, confirmation bias, and cognitive dissonance are present as well. Another study points to the occurrence of negativity bias and novelty bias.

== Spread ==

Bots are used by both private and public parties and have been observed in politics and during crises. Their presence has been studied across many countries, with incidence in more than 80 countries. Some studies have found bots to be effective, though another found limited impact. Similarly, algorithmic manipulation has been found to have an effect.

== Regulation ==

Some studies propose a strategy that incorporates multiple approaches to regulating the tools used in computational propaganda. Possible measures include controlling misinformation and its use in politics through legislation and guidelines; having platforms combat fake accounts and misleading information; and devising psychology-based interventions. Information literacy has also been proposed as a countermeasure to these tools.

However, some of these approaches have reported shortcomings. In Germany, for example, legislative efforts have encountered problems and opposition. In the case of social media, self-regulation is difficult to demand, and the platforms' own measures may be insufficient while concentrating the power of decision in their hands. Information literacy has its limits as well.

== Detection ==

Computational propaganda detection can focus either on content or on accounts.

=== Detecting propaganda content ===

Two ways to detect propaganda content are analyzing the text through various means ("text analysis") and detecting coordination among users ("social network analysis"). Early techniques for detecting coordination mostly involved supervised models such as decision trees, random forests, SVMs, and neural networks; these analyze accounts one by one without modeling coordination. Advanced bots and the difficulty of finding or creating datasets have hindered these detection methods. Strategies of modern detection techniques include having the model study a large group of accounts while accounting for coordination, creating specialized algorithms for it, and building unsupervised and semi-supervised models.
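A minimal sketch of the supervised text-analysis route, assuming the scikit-learn library; the tiny inline dataset is invented for illustration, whereas real systems are trained on large labeled corpora.

<syntaxhighlight lang="python">
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = propaganda-like, 0 = ordinary content.
texts = [
    "BREAKING!!! They don't want you to see this SHOCKING truth",
    "Share before it gets deleted! The media is hiding everything",
    "City council approves budget for new library wing",
    "Researchers publish study on local air quality trends",
]
labels = [1, 1, 0, 0]

# TF-IDF word/bigram features feeding a random forest, one of the
# early supervised models mentioned above.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["SHOCKING!!! Share this before they delete it"]))
</syntaxhighlight>

Such a per-document classifier scores each text in isolation, which is exactly the limitation (no modeling of coordination) that newer group-level and unsupervised approaches try to overcome.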

=== Detecting propaganda accounts ===

Detecting accounts follows a variety of approaches: methods may seek to identify the author of a piece, use statistical methods, analyze a mix of text and data beyond it (such as account characteristics), or scan user-activity tendencies. This focus also has a social network analysis approach, with a technique that examines temporal elements of campaigns alongside features of detected groups.
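A sketch of the mixed text-and-metadata approach: hypothetical account characteristics of the kind surveyed in this literature are fed to an off-the-shelf classifier (scikit-learn assumed; the accounts, feature names, and labels are invented).

<syntaxhighlight lang="python">
from sklearn.ensemble import RandomForestClassifier

def account_features(acct):
    """Blend simple text statistics with behavioral metadata for one account."""
    return [
        acct["posts_per_day"],                          # automation posts at high volume
        acct["followers"] / max(acct["following"], 1),  # fake accounts often follow far more than they attract
        acct["account_age_days"],
        acct["avg_urls_per_post"],                      # link-heavy feeds suggest amplification
        acct["duplicate_text_ratio"],                   # near-identical posts across the feed
    ]

accounts = [
    {"posts_per_day": 310, "followers": 12, "following": 4800, "account_age_days": 20,
     "avg_urls_per_post": 0.9, "duplicate_text_ratio": 0.8},
    {"posts_per_day": 3, "followers": 250, "following": 300, "account_age_days": 2400,
     "avg_urls_per_post": 0.1, "duplicate_text_ratio": 0.02},
]
labels = [1, 0]  # 1 = likely automated propaganda account, 0 = ordinary user

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([account_features(a) for a in accounts], labels)
print(clf.predict([account_features(accounts[0])]))  # predicted label for the first account
</syntaxhighlight>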

=== Limitations ===

Detection techniques are not without their issues. One is that actors evolve their coordination techniques and can operate in the time it takes new detection methods to be developed, requiring real-time approaches. Detection also faces other challenges: techniques have yet to adapt to different media formats, should integrate explainability, could explain the how and why of a propagandistic document or user, and may confront content that is increasingly difficult to detect and increasingly automated. Detection also suffers from a lack of datasets, and creating them can involve sensitive user data that requires extensive work to protect.

== References ==

{{reflist}}

== Further reading ==

* Woolley, Samuel C., and Philip N. Howard, eds. (2018). ''Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media''
* Howard, Philip N. (2020). ''Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives''

{{Propaganda}}

Category:Propaganda techniques using information

Category:Internet manipulation and propaganda