Synthetic media

{{short description|Artificial production, manipulation, and modification of data and media by automated means}}

{{Distinguish|Chemically defined medium|text=a synthetic growth medium}}

{{Update|date=December 2023}}

{{Use mdy dates|date=October 2023}}

Synthetic media (also known as AI-generated media,{{cite web |last1=Goodstein |first1=Anastasia |title=Will AI Replace Human Creativity? |url=https://www.adlibbing.org/2019/10/07/will-ai-replace-human-creativity/ |website=Adlibbing.org |access-date=30 January 2020}}{{cite web |last1=Waddell |first1=Kaveh |title=Welcome to our new synthetic realities |url=https://www.axios.com/synthetic-realities-fiction-stories-fact-misinformation-ed86ce3b-f1a5-4e7b-ba86-f87a918d962e.html |website=Axios.com |date=14 September 2019 |access-date=30 January 2020 |archive-date=27 October 2021 |archive-url=https://web.archive.org/web/20211027193749/https://www.axios.com/synthetic-realities-fiction-stories-fact-misinformation-ed86ce3b-f1a5-4e7b-ba86-f87a918d962e.html |url-status=live }} media produced by generative AI,{{Cite web|url=https://www.producthunt.com/stories/why-now-is-the-time-to-be-a-maker-in-generative-media|title=Why Now Is The Time to Be a Maker in Generative Media|website=Product Hunt|date=29 October 2019 |access-date=2020-02-15|archive-date=2020-02-15|archive-url=https://web.archive.org/web/20200215071516/https://www.producthunt.com/stories/why-now-is-the-time-to-be-a-maker-in-generative-media|url-status=live}} personalized media, personalized content,{{cite web |last1=Ignatidou |first1=Sophia |title=AI-driven Personalization in Digital Media Political and Societal Implications |url=https://www.chathamhouse.org/sites/default/files/021219%20AI-driven%20Personalization%20in%20Digital%20Media%20final%20WEB.pdf |website=Chatham House |publisher=International Security Department |access-date=30 January 2020 |archive-date=11 December 2019 |archive-url=https://web.archive.org/web/20191211053319/https://www.chathamhouse.org/sites/default/files/021219%20AI-driven%20Personalization%20in%20Digital%20Media%20final%20WEB.pdf |url-status=live }} and colloquially as deepfakes{{cite web |last1=Dirik |first1=Iskender |title=Why it's time to change the conversation around synthetic media |url=https://venturebeat.com/2020/08/12/why-its-time-to-change-the-conversation-around-synthetic-media/ |website=Venture Beat |date=12 August 2020 |access-date=4 October 2020 |archive-date=1 October 2020 |archive-url=https://web.archive.org/web/20201001135155/https://venturebeat.com/2020/08/12/why-its-time-to-change-the-conversation-around-synthetic-media/ |url-status=live }}) is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of producing automated content or producing cultural works (e.g. text, image, sound or video) within a set of human prompted parameters automatically.{{cite web |last1=Vales |first1=Aldana |title=An introduction to synthetic media and journalism |url=https://medium.com/the-wall-street-journal/an-introduction-to-synthetic-media-and-journalism-cbbd70d915cd |website=Medium |date=14 October 2019 |publisher=Wall Street Journal |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130061801/https://medium.com/the-wall-street-journal/an-introduction-to-synthetic-media-and-journalism-cbbd70d915cd |url-status=live }}{{cite web |last1=Rosenbaum |first1=Steven |title=What Is Synthetic Media? 
|url=https://www.mediapost.com/publications/article/341074/what-is-synthetic-media.html |website=MediaPost |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130061803/https://www.mediapost.com/publications/article/341074/what-is-synthetic-media.html |url-status=live }}{{cite web |title=A 2020 Guide to Synthetic Media |url=https://blog.paperspace.com/2020-guide-to-synthetic-media/ |website=Paperspace Blog |access-date=30 January 2020 |date=2020-01-17 |archive-date=2020-01-30 |archive-url=https://web.archive.org/web/20200130061805/https://blog.paperspace.com/2020-guide-to-synthetic-media/ |url-status=live }}{{Cite journal |last=Berry |first=David M. |date=2025-03-19 |title=Synthetic media and computational capitalism: towards a critical theory of artificial intelligence |journal=AI & Society |language=en |doi=10.1007/s00146-025-02265-2 |issn=1435-5655|doi-access=free }} Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology (and often use "deepfakes" as a euphemism, e.g. "deepfakes for text"{{citation needed|date=August 2023}} for natural-language generation; "deepfakes for voices" for neural voice cloning, etc.){{cite web |last1=Ovadya |first1=Aviv |title=Deepfake Myths: Common Misconceptions About Synthetic Media |url=https://securingdemocracy.gmfus.org/deepfake-myths-common-misconceptions-about-synthetic-media/ |website=Securing Democracy |date=14 June 2019 |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130061802/https://securingdemocracy.gmfus.org/deepfake-myths-common-misconceptions-about-synthetic-media/ |url-status=live }}{{cite web |last1=Pangburn |first1=DJ |title=You've been warned: Full body deepfakes are the next step in AI-based human mimicry |url=https://www.fastcompany.com/90407145/youve-been-warned-full-body-deepfakes-are-the-next-step-in-ai-based-human-mimicry |website=Fast Company |date=21 September 2019 |access-date=30 January 2020 |archive-date=8 November 2019 |archive-url=https://web.archive.org/web/20191108161240/https://www.fastcompany.com/90407145/youve-been-warned-full-body-deepfakes-are-the-next-step-in-ai-based-human-mimicry |url-status=live }} Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI altered pornographic videos to insert the faces of famous actresses.{{Cite web|url=https://medium.com/the-wall-street-journal/an-introduction-to-synthetic-media-and-journalism-cbbd70d915cd|title=An Introduction to Synthetic Media and Journalism|first=Aldana|last=Vales|date=October 14, 2019|website=Medium|access-date=January 30, 2020|archive-date=January 30, 2020|archive-url=https://web.archive.org/web/20200130061801/https://medium.com/the-wall-street-journal/an-introduction-to-synthetic-media-and-journalism-cbbd70d915cd|url-status=live}}{{Cite web|title=AI-Assisted Fake Porn Is Here and We're All Fucked|url=https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn|access-date=2021-10-17|website=motherboard.vice.com|date=11 December 2017 
|language=en|archive-date=2019-09-07|archive-url=https://web.archive.org/web/20190907212225/https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn|url-status=live}} Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs and a retreat into AI-generated fantasy worlds.{{cite web |last1=Pasquarelli |first1=Walter |title=Towards Synthetic Reality: When DeepFakes meet AR/VR |url=https://www.oxfordinsights.com/insights/2019/8/6/towards-synthetic-reality-when-deepfakes-meet-arvr |website=Oxford Insights |date=6 August 2019 |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130165731/https://www.oxfordinsights.com/insights/2019/8/6/towards-synthetic-reality-when-deepfakes-meet-arvr |url-status=live }} Synthetic media is an applied form of artificial imagination.

History

=Pre-1950s=

[[File:Maillardet's automaton at the Franklin Institute.webm|thumb|Maillardet's automaton drawing a picture]]

The idea of automated art dates back to the automata of ancient Greek civilization. Nearly 2000 years ago, the engineer Hero of Alexandria described statues that could move and mechanical theatrical devices.{{Citation |last1=Fron |first1=Christian |title=A Short History of the Perception of Robots and Automata from Antiquity to Modern Times |date=2019 |work=Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction |pages=1–12 |editor-last=Korn |editor-first=Oliver |url=https://link.springer.com/chapter/10.1007/978-3-030-17107-0_1 |access-date=2025-03-13 |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-030-17107-0_1 |isbn=978-3-030-17107-0 |last2=Korn |first2=Oliver}} Over the centuries, mechanical artworks drew crowds throughout Europe,{{cite web|url=https://www.youtube.com/watch?v=7YEPhe2Gp0Y|title=A Marvellous Elephant - Waddesdon Manor|last=Waddesdon Manor|date=22 July 2015|via=YouTube|access-date=22 October 2019|archive-date=31 May 2019|archive-url=https://web.archive.org/web/20190531192552/https://www.youtube.com/watch?v=7YEPhe2Gp0Y|url-status=live}} China,{{cite news|last=Kolesnikov-Jessop|first=Sonia|title=Chinese Swept Up in Mechanical Mania|url=https://www.nytimes.com/2011/11/26/fashion/26iht-ACAW-AUTOMATON26.html|access-date=November 25, 2011|newspaper=The New York Times|date=November 25, 2011|quote=Mechanical curiosities were all the rage in China during the 18th and 19th centuries, as the Qing emperors developed a passion for automaton clocks and pocket watches, and the "Sing Song Merchants", as European watchmakers were called, were more than happy to encourage that interest.|archive-date=May 6, 2014|archive-url=https://web.archive.org/web/20140506094509/http://www.nytimes.com/2011/11/26/fashion/26iht-ACAW-AUTOMATON26.html|url-status=live}} India,{{cite journal |last1=Koetsier |first1=Teun |year=2001 |title=On the prehistory of programmable machines: musical automata, looms, calculators |journal=Mechanism and Machine Theory |volume=36 |issue=5 |pages=589–603 |publisher=Elsevier |doi=10.1016/S0094-114X(01)00005-2}} and so on. Other automated novelties such as Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) 1757 also amused audiences.Nierhaus, Gerhard (2009). Algorithmic Composition: Paradigms of Automated Music Generation, pp. 36 & 38n7. {{ISBN|978-3-211-75539-6}}.

Despite their technical capabilities, however, none of these machines was capable of generating original content; all were entirely dependent upon their mechanical designs.

=Rise of artificial intelligence=

{{main|History of artificial intelligence}}

The field of AI research was born at a workshop at Dartmouth College in 1956, begetting the rise of digital computing used as a medium of art as well as the rise of generative art.

Dartmouth conference:

  • {{Citation | last=McCorduck | first=Pamela | title = Machines Who Think | year = 2004 | edition=2nd | location=Natick, MA | publisher=A. K. Peters, Ltd. | isbn=978-1-56881-205-2 | oclc=52197627 |pages=111–136 |ref=none}}
  • {{Crevier 1993 |ref=none}} pp. 47–49, who writes "the conference is generally recognized as the official birthdate of the new science."
  • {{Russell Norvig 2003 |ref=none}} p. 17, who call the conference "the birth of artificial intelligence."
  • {{Citation|last=NRC|title=Funding a Revolution: Government Support for Computing Research|year=1999|author-link=United States National Research Council|chapter=Developments in Artificial Intelligence|publisher=National Academy Press|isbn=978-0-309-06278-7|oclc=246584055|chapter-url=https://archive.org/details/fundingrevolutio00nati |pages=200–201 |ref=none}}

Initial experiments in AI-generated art included the Illiac Suite, a 1957 composition for string quartet which is generally agreed to be the first score composed by an electronic computer.Denis L. Baggi, "[http://www.lim.dico.unimi.it/events/ctama/baggi.htm The Role of Computer Technology in Music and Musicology] {{Webarchive|url=https://web.archive.org/web/20110722063617/http://www.lim.dico.unimi.it/events/ctama/baggi.htm |date=2011-07-22 }}", lim.dico.unimi.it (December 9, 1998). Lejaren Hiller, in collaboration with Leonard Isaacson, programmed the ILLIAC I computer at the University of Illinois at Urbana–Champaign (where both composers were professors) to generate compositional material for his String Quartet No. 4.

In 1960, the Russian researcher R. Kh. Zaripov published the world's first paper on algorithmic music composition using the "Ural-1" computer.{{cite journal|last=Zaripov|first=R.Kh.|title=Об алгоритмическом описании процесса сочинения музыки (On algorithmic description of process of music composition)|journal=Proceedings of the USSR Academy of Sciences|year=1960|volume=132|issue=6}}

In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer capable of recognizing patterns in various compositions. The computer analyzed these patterns and used them to create novel melodies. It made its debut on Steve Allen's I've Got a Secret program, where it stumped the panel until panelist Henry Morgan guessed Kurzweil's secret.{{Cite web | url=http://www.kurzweiltech.com/raybio.html | title=About Ray Kurzweil | access-date=2019-11-25 | archive-date=2011-04-04 | archive-url=https://web.archive.org/web/20110404153322/http://www.kurzweiltech.com/raybio.html | url-status=live }}

As early as 1989, artificial neural networks were being used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. He then used a change algorithm to modify the network's input parameters, allowing the network to randomly generate new music, albeit in a highly uncontrolled manner.{{cite journal | last1 = Bharucha | first1 = J.J. | last2 = Todd | first2 = P.M. | year = 1989 | title = Modeling the perception of tonal structure with neural nets | journal = Computer Music Journal | volume = 13 | issue = 4| pages = 44–53 | doi=10.2307/3679552| jstor = 3679552 }}Todd, P.M., and Loy, D.G. (Eds.) (1991). Music and connectionism. Cambridge, MA: MIT Press.

In 2014, Ian Goodfellow and his colleagues developed a new class of machine learning systems: generative adversarial networks (GAN).{{cite conference |title=Generative Adversarial Networks |first1=Ian |last1=Goodfellow |first2=Jean |last2=Pouget-Abadie |first3=Mehdi |last3=Mirza |first4=Bing |last4=Xu |first5=David |last5=Warde-Farley |first6=Sherjil |last6=Ozair |first7=Aaron |last7=Courville |first8=Yoshua |last8=Bengio |conference=Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014) |pages=2672–2680 |url=https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf |year=2014 |access-date=2019-11-25 |archive-date=2019-11-22 |archive-url=https://web.archive.org/web/20191122034612/http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf |url-status=live }} Two neural networks contest with each other in a game (in the sense of game theory, often but not always in the form of a zero-sum game). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning,{{cite arXiv |eprint=1606.03498|title=Improved Techniques for Training GANs|last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |class=cs.LG |year=2016}} fully supervised learning,{{cite journal |last1=Isola |first1=Phillip |last2=Zhu |first2=Jun-Yan |last3=Zhou |first3=Tinghui |last4=Efros |first4=Alexei |title=Image-to-Image Translation with Conditional Adversarial Nets |journal=Computer Vision and Pattern Recognition |date=2017 |url=https://phillipi.github.io/pix2pix/ |access-date=2019-11-25 |archive-date=2020-04-14 |archive-url=https://web.archive.org/web/20200414225155/https://phillipi.github.io/pix2pix/ |url-status=live }} and reinforcement learning.{{cite journal |last1=Ho |first1=Jonathon |last2=Ermon |first2=Stefano |title=Generative Adversarial Imitation Learning |journal=Advances in Neural Information Processing Systems |pages=4565–4573 |url=http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning |year=2016 |arxiv=1606.03476 |bibcode=2016arXiv160603476H |access-date=2019-11-25 |archive-date=2019-10-19 |archive-url=https://web.archive.org/web/20191019222917/http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning |url-status=live }} In a 2016 seminar, Yann LeCun described GANs as "the coolest idea in machine learning in the last twenty years".{{cite web |last1=LeCun |first1=Yann |title=RL Seminar: The Next Frontier in AI: Unsupervised Learning |website=YouTube |date=November 18, 2016 |url=https://www.youtube.com/watch?v=IbjF5VjniVE |access-date=2019-11-25 |archive-date=2020-04-30 |archive-url=https://web.archive.org/web/20200430143323/https://www.youtube.com/watch?v=IbjF5VjniVE |url-status=live }}
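
The adversarial setup described above can be illustrated with a minimal training-loop sketch. The example below is an illustrative PyTorch implementation on two-dimensional toy data, not the architecture of any particular published system; the network sizes, learning rate, and toy data distribution are assumptions chosen for brevity.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy "real" data: points from a 2-D Gaussian the generator must learn to imitate.
def real_batch(n=128):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

# Generator maps random noise to samples; discriminator scores real vs. generated.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(128, 8)).detach()
    d_loss = loss(D(real), torch.ones(128, 1)) + loss(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(128, 8))
    g_loss = loss(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, samples from G approximate the statistics of the real data.
print(G(torch.randn(5, 8)))
</syntaxhighlight>

The two optimization steps correspond to the two sides of the game: the discriminator minimizes its classification error while the generator maximizes it, which is what drives the generated distribution toward the training distribution.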

In 2017, Google unveiled transformers,{{cite web |last1=Uszkoreit |first1=Jakob |title=Transformer: A Novel Neural Network Architecture for Language Understanding |url=https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html |website=Google AI Blog |date=31 August 2017 |access-date=21 June 2020 |archive-date=27 October 2021 |archive-url=https://web.archive.org/web/20211027193749/https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html |url-status=live }} a new type of neural network architecture specialized for language modeling that enabled for rapid advancements in natural language processing. Transformers proved capable of high levels of generalization, allowing networks such as GPT-3 and Jukebox from OpenAI to synthesize text and music respectively at a level approaching humanlike ability.{{cite arXiv |title=Language Models are Few-Shot Learners |eprint=2005.14165 |last1=Brown |first1=Tom B. |last2=Mann |first2=Benjamin |last3=Ryder |first3=Nick |last4=Subbiah |first4=Melanie |last5=Kaplan |first5=Jared |last6=Dhariwal |first6=Prafulla |last7=Neelakantan |first7=Arvind |last8=Shyam |first8=Pranav |last9=Sastry |first9=Girish |last10=Askell |first10=Amanda |last11=Agarwal |first11=Sandhini |last12=Herbert-Voss |first12=Ariel |last13=Krueger |first13=Gretchen |last14=Henighan |first14=Tom |last15=Child |first15=Rewon |last16=Ramesh |first16=Aditya |last17=Ziegler |first17=Daniel M. |last18=Wu |first18=Jeffrey |last19=Winter |first19=Clemens |last20=Hesse |first20=Christopher |last21=Chen |first21=Mark |last22=Sigler |first22=Eric |last23=Litwin |first23=Mateusz |last24=Gray |first24=Scott |last25=Chess |first25=Benjamin |last26=Clark |first26=Jack |last27=Berner |first27=Christopher |last28=McCandlish |first28=Sam |last29=Radford |first29=Alec |last30=Sutskever |first30=Ilya |year=2020 |class=cs.CL |display-authors=29 }}{{cite arXiv |title=Jukebox: A Generative Model for Music |eprint=2005.00341 |last1=Dhariwal |first1=Prafulla |last2=Jun |first2=Heewoo |last3=Payne |first3=Christine |author4=Jong Wook Kim |last5=Radford |first5=Alec |last6=Sutskever |first6=Ilya |year=2020 |class=eess.AS }} There have been some attempts to use GPT-3 and GPT-2 for screenplay writing, resulting in both dramatic (the Italian short film Frammenti di Anime Meccaniche{{Cite web |title=Frammenti di anime meccaniche, il primo corto italiano scritto da un'AI |url=https://www.sentieriselvaggi.it/frammenti-di-anime-meccaniche-il-primo-corto-italiano-scritto-da-unai/ |access-date=2022-01-08 |website=Sentieri Selvaggi}}, written by GPT-2) and comedic narratives (the short film Solicitors by YouTube Creator Calamity AI written by GPT-3).{{Cite web|title=Calamity AI|url=https://www.eliaweiss.com/calamity-ai|access-date=2022-01-08|website=Eli Weiss|language=en-US}}
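
As a concrete illustration of transformer-based text synthesis, the snippet below uses the openly available GPT-2 model through the Hugging Face transformers library; this library choice is an assumption for the example (GPT-3 and Jukebox are not downloadable this way), and the generated continuation will differ from run to run.

<syntaxhighlight lang="python">
# pip install transformers torch
from transformers import pipeline

# Load a small, openly available transformer language model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

prompt = "Synthetic media refers to"
result = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
</syntaxhighlight>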

Branches of synthetic media

=Deepfakes=

{{main|Deepfake}}

Deepfakes (a portmanteau of "deep learning" and "fake"{{Cite news | url=https://www.foxnews.com/tech/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-the-rise/ | title=Terrifying high-tech porn: Creepy 'deepfake' videos are on the rise | last=Brandon | first=John | date=2018-02-16 | work=Fox News | access-date=2018-02-20 | language=en-US | archive-date=2018-06-15 | archive-url=https://web.archive.org/web/20180615160819/http://www.foxnews.com/tech/2018/02/16/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-rise.html | url-status=live }}) are the most prominent form of synthetic media.{{cite web |last1=Gregory |first1=Samuel |title=Heard about deepfakes? Don't panic. Prepare |url=https://www.weforum.org/agenda/2018/11/deepfakes-video-pragmatic-preparation-witness/ |website=WE Forum |date=23 November 2018 |publisher=World Economic Forum |access-date=30 January 2020 |archive-date=12 January 2020 |archive-url=https://web.archive.org/web/20200112102413/https://www.weforum.org/agenda/2018/11/deepfakes-video-pragmatic-preparation-witness/ |url-status=live }}{{cite web |last1=Barrabi |first1=Thomas |title=Twitter developing 'synthetic media' policy to combat deepfakes, other harmful posts |url=https://www.foxbusiness.com/technology/twitter-synthetic-media-policy-feedback |website=Fox Business |date=21 October 2019 |publisher=Fox News |access-date=30 January 2020 |archive-date=2 December 2019 |archive-url=https://web.archive.org/web/20191202233157/https://www.foxbusiness.com/technology/twitter-synthetic-media-policy-feedback |url-status=live }} Deepfakes are media productions that take an existing image or video and replace the subject with someone else's likeness using artificial neural networks.{{cite web|url=https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley|title=We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now|last=Cole|first=Samantha|date=24 January 2018|website=Vice|access-date=4 May 2019|archive-date=7 September 2019|archive-url=https://web.archive.org/web/20190907194524/https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley|url-status=live}} They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks (GANs).{{cite news | last1=Schwartz | first1=Oscar | title=You thought fake news was bad?
Deep fakes are where truth goes to die | url=https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth | access-date=14 November 2018 | work=The Guardian | date=12 November 2018 | language=en | archive-date=16 June 2019 | archive-url=https://web.archive.org/web/20190616230351/https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth | url-status=live }} Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud.{{Cite news | url=https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/ | title=What Are Deepfakes & Why the Future of Porn is Terrifying | date=2018-02-20 | work=Highsnobiety | access-date=2018-02-20 | language=en-US | archive-date=2021-07-14 | archive-url=https://web.archive.org/web/20210714032914/https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/ | url-status=live }}{{cite web|url=https://theoutline.com/post/3179/deepfake-videos-are-freaking-experts-out|title=Experts fear face swapping tech could start an international showdown|website=The Outline|language=en|access-date=2018-02-28|archive-date=2020-01-16|archive-url=https://web.archive.org/web/20200116140157/https://theoutline.com/post/3179/deepfake-videos-are-freaking-experts-out|url-status=live}}{{Cite news|url=https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html|title=Here Come the Fake Videos, Too|last=Roose|first=Kevin|date=2018-03-04|work=The New York Times|access-date=2018-03-24|language=en-US|issn=0362-4331|archive-date=2019-06-18|archive-url=https://web.archive.org/web/20190618203019/https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html|url-status=live}}{{cite arXiv |title=Adversarial Learning of Deepfakes in Accounting|language=en|eprint = 1910.03810|last1 = Schreyer|first1 = Marco|last2 = Sattarov|first2 = Timur|last3 = Reimer|first3 = Bernd|last4 = Borth|first4 = Damian|year = 2019|class=cs.LG }} This has elicited responses from both industry and government to detect and limit their use.{{Cite web|url=https://deepfakedetectionchallenge.ai/|title=Join the Deepfake Detection Challenge (DFDC)|website=deepfakedetectionchallenge.ai|access-date=2019-11-08|archive-date=2020-01-12|archive-url=https://web.archive.org/web/20200112102819/https://deepfakedetectionchallenge.ai/|url-status=live}}{{Cite web|url=https://www.congress.gov/bill/116th-congress/house-bill/3230|title=H.R.3230 - 116th Congress (2019-2020): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019|last=Clarke|first=Yvette D.|date=2019-06-28|website=www.congress.gov|access-date=2019-10-16|archive-date=2019-12-17|archive-url=https://web.archive.org/web/20191217110329/https://www.congress.gov/bill/116th-congress/house-bill/3230|url-status=live}}
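
One common face-swapping approach based on autoencoders trains a single shared encoder with a separate decoder per identity; at swap time, a face from person A is encoded and then decoded with person B's decoder. The sketch below shows only that architectural idea in PyTorch, with made-up layer sizes and random tensors in place of aligned face crops; a real pipeline would also need face detection, alignment, and blending, and is not implied here.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Shared encoder compresses any aligned face crop into a latent code.
encoder = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU(), nn.Linear(512, 128)
)

def make_decoder():
    # One decoder per identity reconstructs a face from the shared latent code.
    return nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                         nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

def training_step(faces_a, faces_b, optimizer):
    # Each decoder learns to reconstruct its own person from the shared encoding.
    loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a.flatten(1)) + \
           nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b.flatten(1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def swap_a_to_b(face_a):
    # The face-swap step: encode person A, decode with person B's decoder.
    with torch.no_grad():
        return decoder_b(encoder(face_a)).view(-1, 3, 64, 64)

# Example with random tensors standing in for aligned 64x64 face crops.
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder_a.parameters()) +
                       list(decoder_b.parameters()), lr=1e-3)
fa, fb = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
print(training_step(fa, fb, opt), swap_a_to_b(fa).shape)
</syntaxhighlight>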

The term deepfakes originated around the end of 2017 from a Reddit user named "deepfakes". He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actresses in pornographic videos, while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies.{{cite web|url=https://mashable.com/2018/01/31/nicolas-cage-face-swapping-deepfakes/|title=People Are Using Face-Swapping Tech to Add Nicolas Cage to Random Movies and What Is 2018|last=Haysom|first=Sam|date=31 January 2018|website=Mashable|access-date=4 April 2019|archive-date=24 July 2019|archive-url=https://web.archive.org/web/20190724221500/https://mashable.com/2018/01/31/nicolas-cage-face-swapping-deepfakes/|url-status=live}} In December 2017, Samantha Cole published an article about r/deepfakes in Vice that drew the first mainstream attention to deepfakes being shared in online communities.{{cite web|url=https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn|title=AI-Assisted Fake Porn Is Here and We're All Fucked|last=Cole|first=Samantha|date=11 December 2017|website=Vice|access-date=19 December 2018|archive-date=7 September 2019|archive-url=https://web.archive.org/web/20190907212225/https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn|url-status=live}} Six weeks later, Cole wrote in a follow-up article about the large increase in AI-assisted fake pornography. In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography.{{Cite news|url=https://www.cnbc.com/2018/02/08/reddit-pornhub-ban-deepfake-porn-videos.html|title=Reddit, Pornhub ban videos that use A.I. to superimpose a person's face over an X-rated actor|last=Kharpal|first=Arjun|date=2018-02-08|work=CNBC|access-date=2018-02-20|archive-date=2019-04-10|archive-url=https://web.archive.org/web/20190410050631/https://www.cnbc.com/2018/02/08/reddit-pornhub-ban-deepfake-porn-videos.html|url-status=live}} Other websites have also banned the use of deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub.{{Cite web|url=https://www.vice.com/en_us/article/ywqgab/twitter-bans-deepfakes|title=Twitter Is the Latest Platform to Ban AI-Generated Porn|last=Cole|first=Samantha|date=2018-02-06|website=Vice|language=en|access-date=2019-11-08|archive-date=2019-11-01|archive-url=https://web.archive.org/web/20191101165115/https://www.vice.com/en_us/article/ywqgab/twitter-bans-deepfakes|url-status=live}} However, some websites have not yet banned Deepfake content, including 4chan and 8chan.{{cite web|url=https://www.dailydot.com/unclick/deepfake-sites-reddit-ban/|title=Here's where 'deepfakes,' the new fake celebrity porn, went after the Reddit ban|last=Hathaway|first=Jay|date=8 February 2018|website=The Daily Dot|access-date=22 December 2018|archive-date=6 July 2019|archive-url=https://web.archive.org/web/20190706092234/https://www.dailydot.com/unclick/deepfake-sites-reddit-ban/|url-status=live}}

Non-pornographic deepfake content continues to grow in popularity with videos from YouTube creators such as Ctrl Shift Face and Shamook.{{cite web|url=https://nerdist.com/article/deepfake-bill-hader-tom-cruise/|title=Deepfake Technology Turns Bill Hader Into Tom Cruise|last=Walsh|first=Michael|date=19 August 2019|website=Nerdist|access-date=1 June 2020|archive-date=2 June 2020|archive-url=https://web.archive.org/web/20200602061415/https://nerdist.com/article/deepfake-bill-hader-tom-cruise/|url-status=live}}{{cite web|url=https://mashable.com/video/will-smith-takes-keanus-place-in-the-matrix-deepfake/|title=Will Smith takes Keanu's place in 'The Matrix' in new deepfake|last=Moser|first=Andy|date=5 September 2019|website=Mashable|access-date=1 June 2020|archive-date=4 August 2020|archive-url=https://web.archive.org/web/20200804165900/https://mashable.com/video/will-smith-takes-keanus-place-in-the-matrix-deepfake/|url-status=live}} A mobile application, Impressions, was launched for iOS in March 2020. The app provides a platform for users to deepfake celebrity faces into videos in a matter of minutes.{{Cite web|url=https://www.dailydot.com/debug/impressions-deepfake-app/|title=You can now deepfake yourself into a celebrity with just a few clicks|last=Thalen|first=Mikael|website=daily dot|language=en|access-date=2020-04-03|archive-date=2020-04-06|archive-url=https://web.archive.org/web/20200406221457/https://www.dailydot.com/debug/impressions-deepfake-app/|url-status=live}}

=Image synthesis=

Image synthesis is the artificial production of visual media, especially through algorithmic means. In the emerging world of synthetic media, the work of digital-image creation, once the domain of highly skilled programmers and Hollywood special-effects artists, could be automated by expert systems capable of producing realism on a vast scale.{{cite web |last1=Rothman |first1=Joshua |title=In The Age of A.I., Is Seeing Still Believing? |url=https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing |website=New Yorker |date=5 November 2018 |access-date=30 January 2020 |archive-date=10 January 2020 |archive-url=https://web.archive.org/web/20200110100907/https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing |url-status=live }} One subfield of this is human image synthesis, the use of neural networks to make believable and even photorealistic renditions[https://ieeexplore.ieee.org/document/568819 Physics-based muscle model for mouth shape control] {{Webarchive|url=https://web.archive.org/web/20190827201246/https://ieeexplore.ieee.org/document/568819 |date=2019-08-27 }} on IEEE Explore (requires membership)[https://ieeexplore.ieee.org/document/531968 Realistic 3D facial animation in virtual space teleconferencing] {{Webarchive|url=https://web.archive.org/web/20190827201250/https://ieeexplore.ieee.org/document/531968 |date=2019-08-27 }} on IEEE Explore (requires membership) of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans without the need for human assistance once the training phase has been completed, whereas the old-school 7D route required massive amounts of human work. The website This Person Does Not Exist showcases fully automated human image synthesis by endlessly generating images that look like portraits of human faces.{{Cite web |url=https://www.lyrn.ai/2018/12/26/a-style-based-generator-architecture-for-generative-adversarial-networks/ |title=Style-based GANs – Generating and Tuning Realistic Artificial Faces |last=Horev |first=Rani |date=2018-12-26 |website=Lyrn.AI |access-date=2019-02-16 |archive-date=2020-11-05 |archive-url=https://web.archive.org/web/20201105101517/https://www.lyrn.ai/2018/12/26/a-style-based-generator-architecture-for-generative-adversarial-networks/ |url-status=live }}

=Audio synthesis=

Beyond deepfakes and image synthesis, audio is another area where AI is used to create synthetic media.{{cite web |last1=Ovadya |first1=Aviv |last2=Whittlestone |first2=Jess |title=Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning |url=https://www.researchgate.net/publication/334735395 |website=researchgate.net |access-date=30 January 2020 |archive-date=27 October 2021 |archive-url=https://web.archive.org/web/20211027193750/https://www.researchgate.net/publication/334735395_Reducing_malicious_use_of_synthetic_media_research_Considerations_and_potential_release_practices_for_machine_learning |url-status=live }} Audio synthesis is, in principle, capable of generating any conceivable sound that can be achieved through audio waveform manipulation, and might conceivably be used to generate stock sound-effect audio or to simulate the audio of currently imaginary things.{{cite web |title=Ultra Fast Audio Synthesis with MelGAN |url=https://www.descript.com/post/ultra-fast-audio-synthesis-with-melgan |website=Descript.com |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130165731/https://www.descript.com/post/ultra-fast-audio-synthesis-with-melgan |url-status=live }}
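
At the lowest level, synthesizing audio simply means computing a waveform sample by sample. The short example below generates a one-second tone and writes it to a WAV file using only the Python standard library; it stands in for the far more elaborate waveforms that neural models such as MelGAN produce, and the frequency, amplitude, and file name are arbitrary choices.

<syntaxhighlight lang="python">
import math
import struct
import wave

SAMPLE_RATE = 44100
FREQ_HZ = 440.0          # concert A
DURATION_S = 1.0

# Compute one second of a sine waveform, sample by sample, as 16-bit integers.
samples = [
    int(32767 * 0.5 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION_S))
]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<" + "h" * len(samples), *samples))
</syntaxhighlight>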

=AI art=

{{Excerpt|Artificial intelligence art}}

{{Excerpt|Artificial intelligence art|Imagery|hat=no}}

==Music generation==

{{main|Computer music|Music and artificial intelligence|Pop music automation}}

The capacity to generate music through autonomous, non-programmable means has been sought after since the days of antiquity; with developments in artificial intelligence, two particular domains have arisen:

  1. The robotic creation of music, whether through machines playing instruments or sorting of virtual instrument notes (such as through MIDI files){{Cite web|url=http://people.bu.edu/bkulis/projects/music/index.html|title=Combining Deep Symbolic and Raw Audio Music Models|website=people.bu.edu|access-date=2020-02-01|archive-date=2020-02-15|archive-url=https://web.archive.org/web/20200215211324/http://people.bu.edu/bkulis/projects/music/index.html|url-status=live}}{{Cite web|url=https://www.researchgate.net/publication/334415395|title=A White Paper on the Future of Artificial Intelligence|first1=Helmut|last1=Linde|first2=Immanuel|last2=Schweizer|date=July 5, 2019|via=ResearchGate|access-date=February 1, 2020|archive-date=October 27, 2021|archive-url=https://web.archive.org/web/20211027193752/https://www.researchgate.net/publication/334415395_A_White_Paper_on_the_Future_of_Artificial_Intelligence|url-status=live}}
  2. Directly generating waveforms that perfectly recreate instrumentation and human voice without the need for instruments, MIDI, or organizing premade notes.{{Cite web|url=https://openreview.net/forum?id=H1xQVn09FX|title=GANSynth: Adversarial Neural Audio Synthesis|first1=Jesse|last1=Engel|first2=Kumar Krishna|last2=Agrawal|first3=Shuo|last3=Chen|first4=Ishaan|last4=Gulrajani|first5=Chris|last5=Donahue|first6=Adam|last6=Roberts|date=September 27, 2018|via=openreview.net|access-date=February 1, 2020|archive-date=February 14, 2020|archive-url=https://web.archive.org/web/20200214233038/https://openreview.net/forum?id=H1xQVn09FX|url-status=live}}
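
As a minimal sketch of the first, symbolic domain above, the fragment below composes a short random-walk melody over a pentatonic scale and writes it as a MIDI file. It assumes the third-party mido library and a uniform note length, and is meant only to show what sorting of virtual instrument notes looks like in code, not to represent any particular AI system.

<syntaxhighlight lang="python">
# pip install mido
import random
import mido

PENTATONIC = [60, 62, 64, 67, 69, 72]   # C major pentatonic, MIDI note numbers
TICKS_PER_NOTE = 240

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

note = random.choice(PENTATONIC)
for _ in range(32):
    # Random walk over the scale: move up or down by at most one scale step.
    idx = PENTATONIC.index(note)
    idx = max(0, min(len(PENTATONIC) - 1, idx + random.choice([-1, 0, 1])))
    note = PENTATONIC[idx]
    track.append(mido.Message("note_on", note=note, velocity=80, time=0))
    track.append(mido.Message("note_off", note=note, velocity=0, time=TICKS_PER_NOTE))

mid.save("melody.mid")
</syntaxhighlight>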

=Speech synthesis=

{{main|Speech synthesis}}

Speech synthesis has been identified as a popular branch of synthetic media{{cite web |last1=Kambhampati |first1=Subbarao |title=Perception won't be reality, once AI can manipulate what we see |url=https://thehill.com/opinion/cybersecurity/470826-perception-wont-be-reality-once-ai-can-manipulate-what-we-see |website=TheHill |date=17 November 2019 |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130061806/https://thehill.com/opinion/cybersecurity/470826-perception-wont-be-reality-once-ai-can-manipulate-what-we-see |url-status=live }} and is defined as the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.{{Cite book |first1=Jonathan |last1=Allen |first2=M. Sharon |last2=Hunnicutt |first3=Dennis |last3=Klatt |title=From Text to Speech: The MITalk system |publisher=Cambridge University Press |year=1987 |isbn=978-0-521-30641-6 |url-access=registration |url=https://archive.org/details/fromtexttospeech00alle }}

Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.{{Cite journal | doi = 10.1121/1.386780 | last1 = Rubin | first1 = P. | last2 = Baer | first2 = T. | last3 = Mermelstein | first3 = P. | year = 1981 | title = An articulatory synthesizer for perceptual research | journal = Journal of the Acoustical Society of America | volume = 70 | issue = 2| pages = 321–328 | bibcode = 1981ASAJ...70..321R }}
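
The word-level concatenative approach described above can be sketched very simply: look up a prerecorded clip for each word and join the waveforms. The example below uses only the standard library; the file names in the RECORDINGS dictionary are hypothetical placeholders, and a real system would also smooth the joins and fall back to smaller units such as diphones.

<syntaxhighlight lang="python">
import wave

# Hypothetical prerecorded word clips, assumed to share the same sample format.
RECORDINGS = {
    "hello": "units/hello.wav",
    "world": "units/world.wav",
}

def synthesize(text, out_path="speech.wav"):
    """Concatenate stored word recordings to 'speak' the given text."""
    frames, params = [], None
    for word in text.lower().split():
        with wave.open(RECORDINGS[word], "rb") as clip:
            params = clip.getparams()                 # keep channel/rate settings
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

synthesize("hello world")
</syntaxhighlight>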

Virtual assistants such as Siri and Alexa have the ability to turn text into audio and synthesize speech.{{cite web |last1=Oyedeji |first1=Miracle |title=Beginner's Guide to Synthetic Media and its Effects on Journalism |url=https://www.stateofdigitalpublishing.com/audience-development/beginners-guide-to-synthetic-media/ |website=State of Digital Publishing |date=14 October 2019 |access-date=1 February 2020 |archive-date=1 February 2020 |archive-url=https://web.archive.org/web/20200201044338/https://www.stateofdigitalpublishing.com/audience-development/beginners-guide-to-synthetic-media/ |url-status=live }}

In 2016, Google DeepMind unveiled WaveNet, a deep generative model of raw audio waveforms that could learn to understand which waveforms best resembled human speech as well as musical instrumentation.{{Cite web | url=https://deepmind.com/blog/article/wavenet-generative-model-raw-audio | title=WaveNet: A Generative Model for Raw Audio | date=September 8, 2016 | access-date=2019-11-25 | archive-date=2021-10-27 | archive-url=https://web.archive.org/web/20211027193750/https://deepmind.com/blog/article/wavenet-generative-model-raw-audio | url-status=live }} Some projects offer real-time generation of synthetic speech using deep learning, such as 15.ai, a web application text-to-speech tool developed by an MIT research scientist.{{cite web |url=https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |title=Website Lets You Make GLaDOS Say Whatever You Want |last=Zwiezen |first=Zack |date=2021-01-18 |website=Kotaku |access-date=2021-01-18 |archive-date=2021-01-17 |archive-url=https://web.archive.org/web/20210117164748/https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835 |url-status=live }}{{cite magazine |url=https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |title=Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App |last=Ruppert |first=Liana |date=2021-01-18 |magazine=Game Informer |access-date=2021-01-18 |archive-date=2021-01-18 |archive-url=https://web.archive.org/web/20210118175543/https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things |url-status=live }}{{cite web |url=https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool |title=Make the cast of TF2 recite old memes with this AI text-to-speech tool |last=Clayton |first=Natalie |date=2021-01-19 |website=PC Gamer |access-date=2021-01-19 |archive-date=2021-01-19 |archive-url=https://web.archive.org/web/20210119133726/https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool/ |url-status=live }}{{cite web |url=https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |title=Put words in game characters' mouths with this fascinating text to speech tool |last=Morton |first=Lauren |date=2021-01-18 |website=Rock, Paper, Shotgun |access-date=2021-01-18 |archive-date=2021-01-18 |archive-url=https://web.archive.org/web/20210118213308/https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/ |url-status=live }}

=Natural-language generation=

{{Main|Computational creativity#Story generation|Computational creativity#Poetry}}

Natural-language generation (NLG, sometimes synonymous with text synthesis) is a software process that transforms structured data into natural language. It can be used to produce long form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out by a text-to-speech system. Interest in natural-language generation increased in 2019 after OpenAI unveiled GPT2, an AI system that generates text matching its input in subject and tone.{{cite web |last1=Clark |first1=Jack |last2=Brundage |first2=Miles |last3=Solaiman |first3=Irene |author-link3=Irene Solaiman |date=20 August 2019 |title=GPT-2: 6-Month Follow-Up |url=https://openai.com/blog/gpt-2-6-month-follow-up/ |url-status=live |archive-url=https://web.archive.org/web/20200218150602/https://openai.com/blog/gpt-2-6-month-follow-up/ |archive-date=18 February 2020 |access-date=1 February 2020 |website=OpenAI}} GPT2 is a transformer, a deep machine learning model introduced in 2017 used primarily in the field of natural language processing (NLP).{{cite arXiv|last1=Polosukhin|first1=Illia|last2=Kaiser|first2=Lukasz|last3=Gomez|first3=Aidan N.|last4=Jones|first4=Llion|last5=Uszkoreit|first5=Jakob|last6=Parmar|first6=Niki|last7=Shazeer|first7=Noam|last8=Vaswani|first8=Ashish|date=2017-06-12|title=Attention Is All You Need|eprint=1706.03762|class=cs.CL}}
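
Classic data-to-text NLG, as distinct from open-ended neural generation, often amounts to filling linguistic templates from structured records. The sketch below turns a small weather record into a sentence; the field names and wording are illustrative assumptions rather than any real reporting system.

<syntaxhighlight lang="python">
def weather_report(record: dict) -> str:
    """Render one structured observation as a natural-language sentence."""
    trend = "warmer" if record["temp_c"] > record["yesterday_temp_c"] else "cooler"
    sentence = (
        f"In {record['city']} it is currently {record['temp_c']} °C with "
        f"{record['condition']}, {trend} than yesterday."
    )
    if record.get("rain_prob", 0) >= 0.5:
        sentence += " Rain is likely later today."
    return sentence

print(weather_report({
    "city": "Oslo", "temp_c": 14, "yesterday_temp_c": 11,
    "condition": "scattered clouds", "rain_prob": 0.7,
}))
</syntaxhighlight>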

=Interactive media synthesis=

AI-generated media can be used to develop a hybrid graphics system that could be used in video games, movies, and virtual reality,{{Cite web|url=https://www.theverge.com/2018/12/3/18121198/ai-generated-video-game-graphics-nvidia-driving-demo-neurips|title=Nvidia has created the first video game demo using AI-generated graphics|first=James|last=Vincent|date=December 3, 2018|website=The Verge|access-date=February 2, 2020|archive-date=January 25, 2020|archive-url=https://web.archive.org/web/20200125003723/https://www.theverge.com/2018/12/3/18121198/ai-generated-video-game-graphics-nvidia-driving-demo-neurips|url-status=live}} as well as text-based games such as AI Dungeon 2, which uses either GPT-2 or GPT-3 to allow for near-infinite possibilities that are otherwise impossible to create through traditional game development methods.{{cite magazine | url=https://www.wired.com/story/ai-fueled-dungeon-game-got-much-darker/ | title=It Began as an AI-Fueled Dungeon Game. It Got Much Darker | magazine=Wired | last1=Simonite | first1=Tom }}{{cite web | url=https://www.utahbusiness.com/latitude-games-ai-dungeon-was-changing-the-face-of-ai-generated-content-until-its-users-turned-against-it/ | title=Latitude Games' AI Dungeon was changing the face of AI-generated content | date=22 June 2021 }}{{cite web | url=https://kotaku.com/in-ai-dungeon-2-you-can-do-anything-even-start-a-rock-1840276553 | title=In AI Dungeon 2, You Can do Anything--Even Start a Rock Band Made of Skeletons | date=7 December 2019 }} Computer hardware company Nvidia has also worked on developing AI-generated video game demos, such as a model that can generate an interactive game based on non-interactive videos.{{Cite web|url=https://www.vice.com/en_us/article/a3mjx8/ai-can-generate-interactive-virtual-worlds-based-on-a-simple-videos|title=AI Can Generate Interactive Virtual Worlds Based on Simple Videos|first=Daniel|last=Oberhaus|date=December 3, 2018|access-date=February 2, 2020|archive-date=May 21, 2020|archive-url=https://web.archive.org/web/20200521224011/https://www.vice.com/en_us/article/a3mjx8/ai-can-generate-interactive-virtual-worlds-based-on-a-simple-videos|url-status=live}}
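
The AI Dungeon style of text game can be reduced to a short loop: keep a running story, append the player's action, and let a language model continue it. The sketch below reuses an open GPT-2 pipeline as a stand-in for the much larger models such games actually use; it is an illustrative assumption, not AI Dungeon's implementation.

<syntaxhighlight lang="python">
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
story = "You wake up in a ruined castle with no memory of how you got there."

print(story)
while True:
    action = input("> ")          # e.g. "open the wooden door"
    if action in ("quit", "exit"):
        break
    story += f" You {action}."
    # Continue the story a sentence or two past the player's action.
    continuation = generator(story, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    new_text = continuation[len(story):]
    story = continuation
    print(new_text.strip())
</syntaxhighlight>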

Concerns and controversies

Beyond attacks on organizations, political organizations and leaders have been particular targets of deepfake videos. In 2022, a deepfake was released in which the Ukrainian president appeared to call for surrender in the fight against Russia. The video showed the Ukrainian president telling his soldiers to lay down their arms and surrender.{{Cite web |last=Allyn |first=Bobby |date=March 16, 2022 |title=Deepfake video of Zelenskyy could be 'tip of the iceberg' in info war, experts warn |url=https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia |website=NPR}}

Deepfakes have been used to misrepresent well-known politicians in videos. In separate videos, the face of the Argentine President Mauricio Macri has been replaced by the face of Adolf Hitler, and Angela Merkel's face has been replaced with Donald Trump's.{{cite web |title=Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos |publisher=az Aargauer Zeitung |date=2018-02-03 |url=https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720 |access-date=2019-11-25 |archive-date=2019-04-13 |archive-url=https://web.archive.org/web/20190413014251/https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720 |url-status=live }}{{cite web |author=Patrick Gensing |url=http://faktenfinder.tagesschau.de/hintergrund/deep-fakes-101.html |title=Deepfakes: Auf dem Weg in eine alternative Realität? |access-date=2019-11-25 |archive-date=2018-10-11 |archive-url=https://web.archive.org/web/20181011182211/http://faktenfinder.tagesschau.de/hintergrund/deep-fakes-101.html |url-status=live }}

{{Anchor|DeepNude}}In June 2019, a downloadable Windows and Linux application called DeepNude was released which used neural networks, specifically generative adversarial networks, to remove clothing from images of women. The app had both a paid and unpaid version, the paid version costing $50.{{cite web |last1=Cole |first1=Samantha |last2=Maiberg |first2=Emanuel |last3=Koebler |first3=Jason |title=This Horrifying App Undresses a Photo of Any Woman with a Single Click |url=https://www.vice.com/en_us/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman |website=Vice |access-date=2 July 2019 |date=26 June 2019 |archive-date=2 July 2019 |archive-url=https://web.archive.org/web/20190702011315/https://www.vice.com/en_us/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman |url-status=live }}{{cite news |url=https://www.vice.com/en_us/article/8xzjpk/github-removed-open-source-versions-of-deepnude-app-deepfakes |publisher=Vice Media |title=GitHub Removed Open Source Versions of DeepNude |first=Joseph |last=Cox |date=July 9, 2019 |access-date=November 25, 2019 |archive-date=September 24, 2020 |archive-url=https://web.archive.org/web/20200924083833/https://www.vice.com/en_us/article/8xzjpk/github-removed-open-source-versions-of-deepnude-app-deepfakes |url-status=live }} On June 27 the creators removed the application and refunded consumers.{{cite news | url=https://www.bbc.com/news/technology-48799045 | title=App that can remove women's clothes from images shut down | work=BBC News | date=28 June 2019 }}

The US Congress held a Senate hearing discussing the widespread impacts of synthetic media, including deepfakes, describing it as having the "potential to be used to undermine national security, erode public trust in our democracy and other nefarious reasons."{{cite web |title=Deepfake Report Act of 2019 |url=https://www.congress.gov/congressional-report/116th-congress/senate-report/93/1 |website=Congress.gov |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130061803/https://www.congress.gov/congressional-report/116th-congress/senate-report/93/1 |url-status=live }}

In 2019, voice cloning technology was used to successfully impersonate a chief executive's voice and demand a fraudulent transfer of €220,000.{{Cite news | url=https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 | title=Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case | newspaper=Wall Street Journal | date=30 August 2019 | access-date=2019-11-26 | archive-date=2019-11-20 | archive-url=https://web.archive.org/web/20191120200616/https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 | url-status=live | last1=Stupp | first1=Catherine }} The case raised concerns about the lack of encryption methods over telephones as well as the unconditional trust often given to voice and to media in general.{{Cite news| url=https://www.wsj.com/articles/ai-could-make-cyberattacks-more-dangerous-harder-to-detect-1542128667?mod=article_inline| title=AI Could Make Cyberattacks More Dangerous, Harder to Detect| newspaper=Wall Street Journal| date=2018-11-13| last1=Janofsky| first1=Adam| access-date=2019-11-26| archive-date=2019-11-25| archive-url=https://web.archive.org/web/20191125212111/https://www.wsj.com/articles/ai-could-make-cyberattacks-more-dangerous-harder-to-detect-1542128667?mod=article_inline| url-status=live}}

Starting in November 2019, multiple social media networks began banning synthetic media used for purposes of manipulation in the lead-up to the 2020 United States presidential election.{{cite web |last1=Newton |first1=Casey |title=Facebook's deepfakes ban has some obvious workarounds |url=https://www.theverge.com/interface/2020/1/8/21054906/facebook-deepfakes-ban-loopholes-parody-satire-cheap-fakes |website=The Verge |date=8 January 2020 |access-date=30 January 2020 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130165730/https://www.theverge.com/interface/2020/1/8/21054906/facebook-deepfakes-ban-loopholes-parody-satire-cheap-fakes |url-status=live }}

In 2024, Elon Musk shared a parody ad without clarifying that it was satire, raising concerns about the use of AI in politics.{{Cite web |date=2024-07-28 |title=A parody ad shared by Elon Musk clones Kamala Harris' voice, raising concerns about AI in politics |url=https://apnews.com/article/parody-ad-ai-harris-musk-x-misleading-3a5df582f911a808d34f68b766aa3b8e |access-date=2024-10-22 |website=AP News |language=en}} The shared video contains an AI-cloned voice of Kamala Harris saying things she never said in real life. A few lines from the video's transcription include: "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate." The voice then says that Harris is a "diversity hire" and that she does not know "the first thing about running the country".{{Cite web |date=2024-07-28 |title=A parody ad shared by Elon Musk clones Kamala Harris' voice, raising concerns about AI in politics |url=https://apnews.com/article/parody-ad-ai-harris-musk-x-misleading-3a5df582f911a808d34f68b766aa3b8e |access-date=2024-10-22 |website=AP News |language=en}}

These examples show how synthetic media can affect public reactions to celebrities, political parties, organizations, businesses, and multinational corporations, with concerning potential to harm their image and reputation. It may also erode social trust in public and private institutions, making it harder to maintain confidence in their ability to verify or authenticate "true" over "fake" content.{{Cite journal |last1=Chesney |first1=Bobby |last2=Citron |first2=Danielle |date=2019 |title=Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security |url=https://www.jstor.org/stable/26891938 |journal=California Law Review |volume=107 |issue=6 |pages=1753–1820 |jstor=26891938 |issn=0008-1221}} Chesney and Citron (2019) list the public officials who may be most affected as "elected officials, appointed officials, judges, juries, legislators, staffers, and agencies." Even private institutions will have to develop an awareness of, and policy responses to, this new media form, particularly if they have a wider impact on society.{{Cite journal |last1=Chesney |first1=Bobby |last2=Citron |first2=Danielle |date=2019 |title=Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security |url=https://www.jstor.org/stable/26891938 |journal=California Law Review |volume=107 |issue=6 |pages=1753–1820 |jstor=26891938 |issn=0008-1221}} They further state that "religious institutions are an obvious target, as are politically engaged entities ranging from Planned Parenthood to the NRA."{{Cite journal |last1=Chesney |first1=Bobby |last2=Citron |first2=Danielle |date=2019 |title=Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security |url=https://www.jstor.org/stable/26891938 |journal=California Law Review |volume=107 |issue=6 |pages=1753–1820 |jstor=26891938 |issn=0008-1221}} Indeed, researchers are concerned that synthetic media may deepen and extend the social hierarchies and class differences that gave rise to it in the first place.{{Cite journal |last1=Chesney |first1=Bobby |last2=Citron |first2=Danielle |date=2019 |title=Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security |url=https://www.jstor.org/stable/26891938 |journal=California Law Review |volume=107 |issue=6 |pages=1753–1820 |jstor=26891938 |issn=0008-1221}} A major concern surrounding synthetic media is that it is not only a matter of proving that something is false, but also of proving that something is authentic.{{Cite web |title=Deepfakes: What They Are & How Your Business Is at Risk |url=https://business.bofa.com/en-us/content/cyber-security-journal/deepfakes-business-risks.html#deepfakerisksforcompanies |access-date=2024-10-22 |website=Bank of America |language=en-US}} For example, a recent study found that two out of three cybersecurity professionals noticed deepfakes being used as part of disinformation against businesses in 2022, a 13% increase from the previous year.{{Cite web |title=Deepfakes: What They Are & How Your Business Is at Risk |url=https://business.bofa.com/en-us/content/cyber-security-journal/deepfakes-business-risks.html#deepfakerisksforcompanies |access-date=2024-10-22 |website=Bank of America |language=en-US}}

Potential uses and impacts

Synthetic media techniques involve generating, manipulating, and altering data to emulate creative processes on a much faster and more accurate scale.{{Cite web|url=https://blog.paperspace.com/2020-guide-to-synthetic-media/|title=2020 Guide to Synthetic Media|date=January 17, 2020|website=Paperspace Blog|access-date=January 30, 2020|archive-date=January 30, 2020|archive-url=https://web.archive.org/web/20200130125834/https://blog.paperspace.com/2020-guide-to-synthetic-media/|url-status=live}} As a result, the potential uses are as wide as human creativity itself, ranging from revolutionizing the entertainment industry to accelerating academic research and production. An initial application has been synchronizing lip movements to increase the engagement of dubbing,{{Cite news|url=https://www.economist.com/christmas-specials/2019/12/21/dubbing-is-coming-to-a-small-screen-near-you|title=Dubbing is coming to a small screen near you|newspaper=The Economist|access-date=2020-02-13|issn=0013-0613|archive-date=2020-02-12|archive-url=https://web.archive.org/web/20200212182035/https://www.economist.com/christmas-specials/2019/12/21/dubbing-is-coming-to-a-small-screen-near-you|url-status=live}} a practice that is growing fast with the rise of OTT streaming services.{{Cite web|url=https://www.hollywoodreporter.com/news/netflix-s-global-reach-sparks-dubbing-revolution-public-demands-it-1229761|title=Netflix's Global Reach Sparks Dubbing Revolution: "The Public Demands It"|website=The Hollywood Reporter|date=13 August 2019 |language=en|access-date=2020-02-13|archive-date=2020-04-04|archive-url=https://web.archive.org/web/20200404032511/https://www.hollywoodreporter.com/news/netflix-s-global-reach-sparks-dubbing-revolution-public-demands-it-1229761|url-status=live}} News organizations have explored ways to use video synthesis and other synthetic media technologies to become more efficient and engaging.{{Cite news|url=https://www.reuters.com/article/rpb-synthesia-prototype-idUSKBN2011O3|title=Reuters and Synthesia unveil AI prototype for automated video reports|date=2020-02-07|work=Reuters|access-date=2020-02-13|language=en|archive-date=2020-02-13|archive-url=https://web.archive.org/web/20200213083225/https://www.reuters.com/article/rpb-synthesia-prototype-idUSKBN2011O3|url-status=live}}{{Cite web|url=https://www.bbc.co.uk/blogs/internet/entries/b81f12d4-39b7-4624-86ab-01647d2800ec|title=Can synthetic media drive new content experiences?|date=2020-01-29|website=BBC|language=en|access-date=2020-02-13|archive-date=2020-02-13|archive-url=https://web.archive.org/web/20200213083226/https://www.bbc.co.uk/blogs/internet/entries/b81f12d4-39b7-4624-86ab-01647d2800ec|url-status=live}} Potential future hazards include the use of a combination of different subfields to generate fake news,{{Cite web|url=https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html|title=Fake videos could be the next big problem in the 2020 elections|first=Grace|last=Shao|date=October 15, 2019|website=CNBC|access-date=November 25, 2019|archive-date=November 15, 2019|archive-url=https://web.archive.org/web/20191115053849/https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html|url-status=live}} natural-language bot swarms generating trends and memes, false evidence being generated, and potentially addiction to personalized content and a retreat into AI-generated fantasy worlds within virtual reality.

Advanced text-generating bots could potentially be used to manipulate social media platforms through tactics such as astroturfing.{{cite web | url=https://techpolicy.press/assessing-the-risks-of-language-model-deepfakes-to-democracy/ | title=Assessing the risks of language model "deepfakes" to democracy | date=21 May 2021 }}{{Cite news|url=https://www.businessinsider.com/elon-musk-warns-of-advanced-ai-manipulating-social-media-2019-9|title=Elon Musk has warned that 'advanced AI' could poison social media|last=Hamilton|first=Isobel|date=2019-09-26|access-date=2019-11-25|archive-date=2019-12-21|archive-url=https://web.archive.org/web/20191221214054/https://www.businessinsider.com/elon-musk-warns-of-advanced-ai-manipulating-social-media-2019-9|url-status=live}}

Deep reinforcement learning-based natural-language generators could be used to create advanced chatbots capable of imitating natural human speech.{{Cite arXiv| title=A Deep Reinforcement Learning Chatbot | last1=Serban | first1=Iulian V. | last2=Sankar | first2=Chinnadhurai | last3=Germain | first3=Mathieu | last4=Zhang | first4=Saizheng | last5=Lin | first5=Zhouhan | last6=Subramanian | first6=Sandeep | last7=Kim | first7=Taesup | last8=Pieper | first8=Michael | last9=Chandar | first9=Sarath | last10=Ke | first10=Nan Rosemary | last11=Rajeshwar | first11=Sai | last12=De Brebisson | first12=Alexandre | last13=Sotelo | first13=Jose M. R. | last14=Suhubdy | first14=Dendi | last15=Michalski | first15=Vincent | last16=Nguyen | first16=Alexandre | last17=Pineau | first17=Joelle | last18=Bengio | first18=Yoshua | year=2017 | class=cs.CL | eprint=1709.02349 }}
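
For illustration, the text-generation loop that such dialogue systems build on can be sketched with an off-the-shelf pretrained language model. The example below is a minimal, hypothetical sketch using the open-source Hugging Face transformers library and the GPT-2 model; the model choice, prompt format, and sampling settings are assumptions made for illustration, and the reinforcement-learning component described in the cited work is omitted.

<syntaxhighlight lang="python">
# Minimal, illustrative chat loop built on a pretrained language model.
# This is not the deep reinforcement learning system from the cited paper;
# it only demonstrates the text-generation step such systems build on.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # model choice is an arbitrary example

history = ""
while True:
    user = input("You: ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    history += f"User: {user}\nBot:"
    # Sample a short continuation; the sampling settings are arbitrary choices.
    output = generator(history, max_new_tokens=40, do_sample=True, top_p=0.9,
                       pad_token_id=50256)[0]["generated_text"]
    reply = output[len(history):].split("User:")[0].strip()
    print("Bot:", reply)
    history += f" {reply}\n"
</syntaxhighlight>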

One use case for natural-language generation is generating or assisting with the writing of novels and short stories; another potential development is stylistic editors that emulate the style of professional writers.{{cite news |url=https://www.theatlantic.com/technology/archive/2018/10/automated-on-the-road/571345/ |first=Brian |last=Merchant |title=When an AI Goes Full Jack Kerouac |newspaper=The Atlantic |date=1 October 2018 |access-date=25 November 2019 |archive-date=30 January 2020 |archive-url=https://web.archive.org/web/20200130190959/https://www.theatlantic.com/technology/archive/2018/10/automated-on-the-road/571345/ |url-status=live }}

Image synthesis tools may be able to streamline or even completely automate the creation of certain aspects of visual illustration, such as animated cartoons, comic books, and political cartoons.{{Cite web|url=https://venturebeat.com/2017/06/02/pixar-veteran-creates-a-i-tool-for-automating-2d-animations/|title=Pixar veteran creates AI tool for automating 2D animations|date=June 2, 2017|access-date=November 25, 2019|archive-date=June 11, 2019|archive-url=https://web.archive.org/web/20190611011925/https://venturebeat.com/2017/06/02/pixar-veteran-creates-a-i-tool-for-automating-2d-animations/|url-status=live}} Because automation removes the need for teams of designers, artists, and others involved in producing entertainment, costs could plunge to virtually nothing, allowing for "bedroom multimedia franchises" in which individuals can generate results indistinguishable from the highest-budget productions for little more than the cost of running their computer.{{Cite web|url=https://www.synthesia.io/blog-post/the-future-of-synthetic-media|title=Synthesia|website=www.synthesia.io|access-date=2020-02-12|archive-date=2021-10-27|archive-url=https://web.archive.org/web/20211027193752/https://www.synthesia.io/post/the-future-of-synthetic-media|url-status=live}} Character and scene creation tools may no longer depend on premade assets, thematic limitations, or personal skill, but instead on tweaking parameters and supplying sufficient input.{{Cite web|url=https://medium.com/@radiomonkeys/the-age-of-imaginative-machines-the-coming-democratization-of-art-animation-and-imagination-b0723237d61a|title=The Age of Imaginative Machines: The Coming Democratization of Art, Animation, and Imagination|first=Yuli|last=Ban|date=January 3, 2020|website=Medium|access-date=February 1, 2020|archive-date=February 1, 2020|archive-url=https://web.archive.org/web/20200201051636/https://medium.com/@radiomonkeys/the-age-of-imaginative-machines-the-coming-democratization-of-art-animation-and-imagination-b0723237d61a|url-status=live}}

A combination of speech synthesis and deepfakes has been used to automatically redub an actor's speech into multiple languages without the need for reshoots or language classes. The same techniques can be used by companies for employee onboarding, e-learning, and explainer and how-to videos.{{cite web |title=use cases for text-to-speech and AI avatars |url=https://elai.io/use-cases |website=Elai.io |access-date=15 August 2022}}
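
As a baseline point of comparison, basic speech synthesis is available through off-the-shelf libraries. The sketch below is an illustrative example only, assuming the open-source pyttsx3 package and the operating system's built-in voices; the automated redubbing described above additionally relies on neural voice cloning and lip-synchronization models that such a simple script does not attempt.

<syntaxhighlight lang="python">
# Baseline offline text-to-speech using the pyttsx3 library (pip install pyttsx3),
# which wraps the operating system's built-in voices. The automated redubbing
# described in the text additionally involves neural voice cloning and lip-sync
# models, which are far beyond this illustration.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking speed in words per minute

script = [
    "Welcome to the onboarding session.",
    "This narration was produced by a speech synthesizer.",
]
for line in script:
    engine.say(line)     # queue each line for synthesis

engine.runAndWait()      # block until all queued speech has been spoken
</syntaxhighlight>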

An increase in cyberattacks is also feared, as phishing, catfishing, and social-engineering attacks become easier to automate with these new technologies.

Natural-language generation bots mixed with image synthesis networks may theoretically be used to clog search results, filling search engines with trillions of otherwise useless but legitimate-seeming blogs, websites, and marketing spam.{{Cite web|url=https://www.theverge.com/2019/7/2/19063562/ai-text-generation-spam-marketing-seo-fractl-grover-google|title=Endless AI-generated spam risks clogging up Google's search results|first=James|last=Vincent|date=July 2, 2019|website=The Verge|access-date=December 1, 2019|archive-date=December 6, 2019|archive-url=https://web.archive.org/web/20191206141328/https://www.theverge.com/2019/7/2/19063562/ai-text-generation-spam-marketing-seo-fractl-grover-google|url-status=live}}

There has been speculation about deepfakes being used to create digital actors for future films. Digitally constructed or altered humans have already appeared in films, and deepfakes could contribute further developments in the near future.{{Cite news|url=https://www.theguardian.com/film/2019/jul/03/in-the-age-of-deepfakes-could-virtual-actors-put-humans-out-of-business|title=In the age of deepfakes, could virtual actors put humans out of business?|last=Kemp|first=Luke|date=2019-07-08|work=The Guardian|access-date=2019-10-20|language=en-GB|issn=0261-3077|archive-date=2019-10-20|archive-url=https://web.archive.org/web/20191020223601/https://www.theguardian.com/film/2019/jul/03/in-the-age-of-deepfakes-could-virtual-actors-put-humans-out-of-business|url-status=live}} Amateur deepfake technology has already been used to insert faces into existing films, such as the insertion of Harrison Ford's young face onto that of Alden Ehrenreich, who played Han Solo in Solo: A Star Wars Story,{{Cite web|url=https://www.polygon.com/2018/10/17/17989214/harrison-ford-solo-movie-deepfake-technology|title=Harrison Ford is the star of Solo: A Star Wars Story thanks to deepfake technology|last=Radulovic|first=Petrana|date=2018-10-17|website=Polygon|language=en|access-date=2019-10-20|archive-date=2019-10-20|archive-url=https://web.archive.org/web/20191020223601/https://www.polygon.com/2018/10/17/17989214/harrison-ford-solo-movie-deepfake-technology|url-status=live}} and techniques similar to those used in deepfakes were used for the performance of Princess Leia in Rogue One.{{Cite web|url=https://www.technologyreview.com/s/612241/how-acting-as-carrie-fishers-puppet-made-a-career-for-rogue-ones-princess-leia/|title=How acting as Carrie Fisher's puppet made a career for Rogue One's Princess Leia|last=Winick|first=Erin|website=MIT Technology Review|language=en-US|access-date=2019-10-20|archive-date=2019-10-23|archive-url=https://web.archive.org/web/20191023063609/https://www.technologyreview.com/s/612241/how-acting-as-carrie-fishers-puppet-made-a-career-for-rogue-ones-princess-leia/|url-status=live}}

GANs can be used to create photos of imaginary fashion models, with no need to hire a model, photographer, or makeup artist, or to pay for a studio and transportation.{{cite web |last1=Wong |first1=Ceecee |title=The Rise of AI Supermodels |url=https://www.cdotrends.com/story/14300/rise-ai-supermodels |website=CDO Trends |date=May 27, 2019 |access-date=2019-11-25 |archive-date=2020-04-16 |archive-url=https://web.archive.org/web/20200416005026/https://www.cdotrends.com/story/14300/rise-ai-supermodels |url-status=live }} GANs can be used to create fashion advertising campaigns featuring more diverse groups of models, which may increase purchase intent among people who resemble the models{{cite web |last1=Dietmar |first1=Julia |title=GANs and Deepfakes Could Revolutionize The Fashion Industry |url=https://www.forbes.com/sites/forbestechcouncil/2019/05/21/gans-and-deepfakes-could-revolutionize-the-fashion-industry/#515624593d17 |work=Forbes |access-date=2019-11-25 |archive-date=2019-09-04 |archive-url=https://web.archive.org/web/20190904214151/https://www.forbes.com/sites/forbestechcouncil/2019/05/21/gans-and-deepfakes-could-revolutionize-the-fashion-industry/#515624593d17 |url-status=live }} or their family members.{{cite web |last1=Hamosova |first1=Lenka |title=Personalized Synthetic Advertising — the future for applied synthetic media. |url=https://medium.com/@lenkahamosova/personalized-synthetic-advertising-the-future-for-applied-synthetic-media-fc054c2ae19e |work=Medium |date=10 July 2020 |access-date=2020-11-27 |archive-date=2020-12-05 |archive-url=https://web.archive.org/web/20201205200652/https://medium.com/@lenkahamosova/personalized-synthetic-advertising-the-future-for-applied-synthetic-media-fc054c2ae19e |url-status=live }} GANs can also be used to create portraits, landscapes, and album covers. The ability of GANs to generate photorealistic human bodies presents a challenge to industries such as fashion modeling, which may be at heightened risk of automation.{{Cite web|url=https://research.zalando.com/welcome/mission/research-projects/generative-fashion-design/|title=Generative Fashion Design|access-date=2019-11-25|archive-date=2020-12-03|archive-url=https://web.archive.org/web/20201203004054/http://research.zalando.com/welcome/mission/research-projects/generative-fashion-design/|url-status=live}}{{Cite web|url=https://syncedreview.com/2019/08/29/ai-creates-fashion-models-with-custom-outfits-and-poses/|title=AI Creates Fashion Models With Custom Outfits and Poses|date=August 29, 2019|website=Synced|access-date=November 25, 2019|archive-date=January 9, 2020|archive-url=https://web.archive.org/web/20200109042212/https://syncedreview.com/2019/08/29/ai-creates-fashion-models-with-custom-outfits-and-poses/|url-status=live}}
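
The adversarial training idea behind GANs can be illustrated with a toy example. The sketch below, written in PyTorch, trains a tiny generator and discriminator to mimic a two-dimensional Gaussian "data" distribution; the architecture, hyperparameters, and toy dataset are assumptions chosen for brevity rather than anything used in the fashion applications described above.

<syntaxhighlight lang="python">
# Toy GAN sketch in PyTorch: a generator learns to mimic a simple 2-D "real"
# data distribution while a discriminator learns to tell real from generated
# samples. Network sizes, learning rates, and the toy data are arbitrary;
# photorealistic image GANs use far larger convolutional models.
import torch
import torch.nn as nn

def real_data(n):
    # "Real" samples: a 2-D Gaussian centred at (2, 2)
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: label real samples 1 and generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 2)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 2))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("Generated samples:", G(torch.randn(5, 2)).detach())
</syntaxhighlight>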

In 2019, Dadabots unveiled an AI-generated death metal stream that plays continuously, without pauses.{{Cite web|url=https://newatlas.com/dadabots-death-metal-neural-network-livestream/59394/|title=Meet Dadabots, the AI death metal band playing non-stop on Youtube|date=April 23, 2019|website=New Atlas|access-date=January 15, 2020|archive-date=January 15, 2020|archive-url=https://web.archive.org/web/20200115160039/https://newatlas.com/dadabots-death-metal-neural-network-livestream/59394/|url-status=live}}

Musical artists and their brands may also conceivably be generated from scratch, including AI-generated music, videos, interviews, and promotional material. Conversely, existing music can be altered at will, such as by changing lyrics, singers, instrumentation, or composition.{{Cite web|url=https://www.theverge.com/2019/4/26/18517803/openai-musenet-artificial-intelligence-ai-music-generation-lady-gaga-harry-potter-mozart|title=OpenAI's MuseNet generates AI music at the push of a button|first=Jon|last=Porter|date=April 26, 2019|website=The Verge|access-date=November 25, 2019|archive-date=June 28, 2019|archive-url=https://web.archive.org/web/20190628164236/https://www.theverge.com/2019/4/26/18517803/openai-musenet-artificial-intelligence-ai-music-generation-lady-gaga-harry-potter-mozart|url-status=live}} In 2018, researchers demonstrated TimbreTron, a WaveNet-based pipeline for musical timbre transfer that can shift a recording from the sound of one instrument to that of another.{{Cite web|url=https://www.youtube.com/watch?v=YQAupr7JxNY|title=TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer|date=November 27, 2018 |via=www.youtube.com|access-date=2020-03-11|archive-date=2019-12-31|archive-url=https://web.archive.org/web/20191231133810/https://www.youtube.com/watch?v=YQAupr7JxNY|url-status=live}} Through the use of artificial intelligence, old bands and artists might be "revived" to release new material indefinitely, potentially including "live" concerts and promotional images.

Neural network-powered photo manipulation also has the potential to support problematic behavior of various state actors, not just totalitarian and absolutist regimes.{{Cite web|url=https://www.fpri.org/article/2019/06/the-national-security-challenges-of-artificial-intelligence-manipulated-media-and-deepfakes/|title=The National Security Challenges of Artificial Intelligence, Manipulated Media, and "Deepfakes" - Foreign Policy Research Institute|last=Watts|first=Chris|language=en-US|access-date=2020-02-12|archive-date=2020-05-20|archive-url=https://web.archive.org/web/20200520143133/https://www.fpri.org/article/2019/06/the-national-security-challenges-of-artificial-intelligence-manipulated-media-and-deepfakes/|url-status=live}}

A sufficiently technically competent government or community could use synthetic media to rewrite history, fabricating events and personalities and reshaping ways of thinking – a potential form of epistemicide. Even in otherwise rational and democratic societies, certain social and political groups may use synthetic media to craft cultural, political, and scientific filter bubbles that greatly reduce, or altogether undermine, the public's ability to agree on basic objective facts. Conversely, the existence of synthetic media may be used to discredit factual news sources and scientific facts as "potentially fabricated."

==See also==

==References==