Deepfake pornography

{{Short description|Explicit material created using deepfake technology}}

{{Artificial intelligence}}

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by applying deepfake technology to existing photographs or videos, superimposing a person's face onto the body of a pornographic performer. The practice has sparked controversy because it involves creating and sharing realistic sexual videos of non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts to combat these ethical concerns include legislative and technology-based solutions.

History

The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of the word "deep learning", which refers to the program used to create the videos, and "fake" meaning the videos are not real.{{Citation |last1=Gaur |first1=Loveleen |title=DeepFakes |date=2022-07-27 |url=http://dx.doi.org/10.1201/9781003231493-7 |pages=91–98 |access-date=2023-04-20 |place=New York |publisher=CRC Press |isbn=978-1-003-23149-3 |last2=Arora |first2=Gursimar Kaur |doi=10.1201/9781003231493-7 |archive-date=2024-03-06 |archive-url=https://web.archive.org/web/20240306051459/https://www.taylorfrancis.com/chapters/edit/10.1201/9781003231493-7/deepfakes-loveleen-gaur-gursimar-kaur-arora |url-status=live }}

Deepfake pornography was originally created on a small, individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began with gathering a large amount of source material (both images and videos) of a person's face, then training a deep learning model, typically a generative adversarial network (GAN), to produce a fake video that convincingly swaps that face onto the body of a pornographic performer. The production process has evolved significantly since 2018, however, with the advent of several public apps that have largely automated it.{{Citation |last1=Azmoodeh |first1=Amin |last2=Dehghantanha |first2=Ali |title=Deep Fake Detection, Deterrence and Response: Challenges and Opportunities |year=2022 |publisher=arXiv}}
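The face-swapping scheme most often described in the technical literature is an autoencoder with a single shared encoder and one decoder per identity, with adversarial (GAN) losses sometimes layered on top. The sketch below is a minimal, hypothetical illustration of that architecture only: the layer shapes and dimensions are arbitrary, and a real pipeline would additionally require face detection, alignment, a training loop, and large datasets.

<syntaxhighlight lang="python">
# Minimal sketch of the shared-encoder, per-identity-decoder architecture.
# Illustrative only: sizes are arbitrary and no training loop is shown.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # latent code shared by both identities
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                           # shared between identities A and B
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training would reconstruct each identity's faces through the shared encoder.
# At inference, encoding a face of B and decoding with A's decoder renders
# B's pose and expression with A's identity -- the "swap".
faces_b = torch.rand(8, 3, 64, 64)            # placeholder batch of face crops
swapped = decoder_a(encoder(faces_b))         # shape: (8, 3, 64, 64)
</syntaxhighlight>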

Deepfake pornography is sometimes confused with fake nude photography, but the two are distinct: fake nude photography typically starts from non-sexual images and merely makes it appear that the people in them are nude.

Notable cases

Deepfake technology has been used to create non-consensual pornographic images and videos of famous women. One of the earliest examples occurred in 2017, when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson.{{Cite web |last=Roettgers |first=Janko |date=2018-02-21 |title=Porn Producers Offer to Help Hollywood Take Down Deepfake Videos |url=https://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/ |access-date=2023-04-20 |website=Variety |language=en-US |archive-date=2019-06-10 |archive-url=https://web.archive.org/web/20190610220204/https://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/ |url-status=live }} Johansson spoke publicly on the issue in December 2018, condemning the practice but declining to pursue legal action because she viewed the harassment as inevitable.{{Cite news |title=Scarlett Johansson on fake AI-generated sex videos: 'Nothing can stop someone from cutting and pasting my image' |language=en-US |newspaper=The Washington Post |first=Drew |last=Harwell |date=2018-12-31 |url=https://www.washingtonpost.com/technology/2018/12/31/scarlett-johansson-fake-ai-generated-sex-videos-nothing-can-stop-someone-cutting-pasting-my-image/ |access-date=2023-04-20 |issn=0190-8286 |archive-date=2019-06-13 |archive-url=https://web.archive.org/web/20190613160558/https://www.washingtonpost.com/technology/2018/12/31/scarlett-johansson-fake-ai-generated-sex-videos-nothing-can-stop-someone-cutting-pasting-my-image/ |url-status=live }}

= Rana Ayyub =

In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and had a doctored pornographic video of her circulated online.{{Cite journal |last=Maddocks |first=Sophie |date=2020-06-04 |title='A Deepfake Porn Plot Intended to Silence Me': exploring continuities between pornographic and 'political' deep fakes |url=http://dx.doi.org/10.1080/23268743.2020.1757499 |journal=Porn Studies |volume=7 |issue=4 |pages=415–423 |doi=10.1080/23268743.2020.1757499 |s2cid=219910130 |issn=2326-8743 |access-date=2023-04-20 |archive-date=2024-03-06 |archive-url=https://web.archive.org/web/20240306051420/https://www.tandfonline.com/doi/full/10.1080/23268743.2020.1757499 |url-status=live }} In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects the experience has had on her, explaining that she continued to struggle with her mental health and that the images and videos resurfaced whenever she took on a high-profile case.{{Cite web |date=2018-11-21 |title=I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me |url=https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316 |access-date=2023-04-20 |website=HuffPost UK |first=Rana |last=Ayyub |language=en |archive-date=2023-04-20 |archive-url=https://web.archive.org/web/20230420045217/https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316 |url-status=live }}

= Atrioc controversy =

In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers during a live broadcast. He has since admitted to paying for AI-generated pornography, and apologized to the women and to his fans.{{Cite web |last=Middleton |first=Amber |title=A Twitch streamer was caught watching deepfake porn of women gamers. Sexual images made without consent can be traumatic and abusive, experts say — and women are the biggest victims. |url=https://www.insider.com/atrioc-caught-qtcinderella-ai-picture-twitch-deepfake-controversy-streamer-trauma-2023-2 |access-date=2023-04-20 |website=Insider |date=2023-02-10 |language=en-US |archive-date=2024-03-06 |archive-url=https://web.archive.org/web/20240306051443/https://www.businessinsider.com/atrioc-caught-qtcinderella-ai-picture-twitch-deepfake-controversy-streamer-trauma-2023-2 |url-status=live }}{{Cite web |title=Twitch streamer Atrioc gives tearful apology after paying for deepfakes of female streamers |url=https://www.dexerto.com/entertainment/twitch-streamer-atrioc-gives-tearful-apology-after-paying-for-deepfakes-of-female-streamers-2047162/ |access-date=2023-06-14 |website=Dexerto |first=Calum |last=Patterson |date=2023-01-30 |language=en |archive-date=2023-05-09 |archive-url=https://web.archive.org/web/20230509175132/https://www.dexerto.com/entertainment/twitch-streamer-atrioc-gives-tearful-apology-after-paying-for-deepfakes-of-female-streamers-2047162/ |url-status=live }}

= Taylor Swift =

{{main article|Taylor Swift deepfake pornography controversy}}

In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram.{{Cite news |last=Stokel-Walker |first=Chris |date=January 25, 2024 |title=The explicit AI-created images of Taylor Swift flooding the internet highlight a major problem with generative AI |url=https://www.fastcompany.com/91016953/deepfake-taylor-swift-ai-dangers |access-date=January 26, 2024 |work=Fast Company |archive-date=January 26, 2024 |archive-url=https://web.archive.org/web/20240126011706/https://www.fastcompany.com/91016953/deepfake-taylor-swift-ai-dangers |url-status=live }}{{Cite web |last=Belanger |first=Ashley |date=2024-01-25 |title=X can't stop spread of explicit, fake AI Taylor Swift images |url=https://arstechnica.com/tech-policy/2024/01/fake-ai-taylor-swift-images-flood-x-amid-calls-to-criminalize-deepfake-porn/ |access-date=2024-01-25 |website=Ars Technica |language=en-us |archive-date=2024-01-25 |archive-url=https://web.archive.org/web/20240125204714/https://arstechnica.com/tech-policy/2024/01/fake-ai-taylor-swift-images-flood-x-amid-calls-to-criminalize-deepfake-porn/ |url-status=live }}{{Cite web |last=Kelly |first=Samantha Murphy |date=2024-01-25 |title=Explicit, AI-generated Taylor Swift images spread quickly on social media {{!}} CNN Business |url=https://www.cnn.com/2024/01/25/tech/taylor-swift-ai-generated-images/index.html |access-date=2024-01-25 |website=CNN |language=en |archive-date=2024-01-25 |archive-url=https://web.archive.org/web/20240125210152/https://www.cnn.com/2024/01/25/tech/taylor-swift-ai-generated-images/index.html |url-status=live }} One tweet with the images was viewed over 45 million times before being removed.{{Cite web |last=Weatherbed |first=Jess |date=2024-01-25 |title=Trolls have flooded X with graphic Taylor Swift AI fakes |url=https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending |access-date=2024-01-25 |website=The Verge |language=en |archive-date=2024-01-25 |archive-url=https://web.archive.org/web/20240125170548/https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending |url-status=live }} A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters.{{Cite web |last1=Maiberg |first1=Emanuel |last2=Cole |first2=Samantha |date=2024-01-25 |title=AI-Generated Taylor Swift Porn Went Viral on Twitter. Here's How It Got There |url=https://www.404media.co/ai-generated-taylor-swift-porn-twitter/ |access-date=2024-01-25 |website=404 Media |language=en |archive-date=2024-01-25 |archive-url=https://web.archive.org/web/20240125173515/https://www.404media.co/ai-generated-taylor-swift-porn-twitter/ |url-status=live }}{{Cite web |last=Belanger |first=Ashley |date=2024-01-29 |title=Drastic moves by X, Microsoft may not stop spread of fake Taylor Swift porn |url=https://arstechnica.com/tech-policy/2024/01/drastic-moves-by-x-microsoft-may-not-stop-spread-of-fake-taylor-swift-porn/ |access-date=2024-01-30 |website=Ars Technica |language=en-us |archive-date=2024-01-29 |archive-url=https://web.archive.org/web/20240129222757/https://arstechnica.com/tech-policy/2024/01/drastic-moves-by-x-microsoft-may-not-stop-spread-of-fake-taylor-swift-porn/ |url-status=live }} After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes.{{Cite web |last=Zhang |first=Cat |date=2024-01-26 |title=The Swiftie Fight to Protect Taylor Swift From AI |url=https://www.thecut.com/2024/01/taylor-swift-ai-deepfake-trending-social-media.html |url-status=live |archive-url=https://web.archive.org/web/20240130163639/https://www.thecut.com/2024/01/taylor-swift-ai-deepfake-trending-social-media.html |archive-date=2024-01-30 |access-date=2024-03-06 |website=The Cut |language=en}} Searches for Swift's name were temporarily disabled on X, returning an error message instead.{{cite magazine |last=Spangler |first=Todd |date=2024-01-27 |title=X/Twitter Blocks Searches for 'Taylor Swift' as a 'Temporary Action to Prioritize Safety' After Deluge of Explicit AI Fakes |url=https://variety.com/2024/digital/news/x-twitter-blocks-searches-taylor-swift-explicit-nude-ai-fakes-1235889742/ |access-date=2024-01-29 |magazine=Variety |archive-date=2024-01-28 |archive-url=https://web.archive.org/web/20240128055704/https://variety.com/2024/digital/news/x-twitter-blocks-searches-taylor-swift-explicit-nude-ai-fakes-1235889742/ |url-status=live }} Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community.{{Cite news |last=Hsu |first=Tiffany |date=February 5, 2024 |title=Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says |url=https://www.nytimes.com/2024/02/05/business/media/taylor-swift-ai-fake-images.html |access-date=February 10, 2024 |work=The New York Times |archive-date=February 9, 2024 |archive-url=https://web.archive.org/web/20240209144437/https://www.nytimes.com/2024/02/05/business/media/taylor-swift-ai-fake-images.html |url-status=live }}{{Cite web |last=Belanger |first=Ashley |date=2024-02-05 |title=4chan daily challenge sparked deluge of explicit AI Taylor Swift images |url=https://arstechnica.com/tech-policy/2024/02/4chan-daily-challenge-sparked-deluge-of-explicit-ai-taylor-swift-images/ |access-date=2024-02-09 |website=Ars Technica |language=en-us |archive-date=2024-02-09 |archive-url=https://web.archive.org/web/20240209035029/https://arstechnica.com/tech-policy/2024/02/4chan-daily-challenge-sparked-deluge-of-explicit-ai-taylor-swift-images/ |url-status=live }}

A source close to Swift told the Daily Mail that she was considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge."{{Cite web |last=Specter |first=Emma |date=2024-01-26 |title=If Anyone Can Stop the Coming AI Hellscape, It's Taylor Swift |url=https://www.vogue.com/article/taylor-swift-deepfake-x-possible-legal-action |access-date=2024-03-06 |website=Vogue |language=en-US |archive-date=2024-02-06 |archive-url=https://web.archive.org/web/20240206223948/https://www.vogue.com/article/taylor-swift-deepfake-x-possible-legal-action |url-status=live }}

The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre,{{cite news |url=https://www.theguardian.com/music/2024/jan/28/taylor-swift-x-searches-blocked-fake-explicit-images |title=Taylor Swift searches blocked on X after fake explicit images of pop singer spread |newspaper=The Guardian |agency=Reuters |date=2024-01-29 |access-date=2024-01-29 |archive-date=2024-01-29 |archive-url=https://web.archive.org/web/20240129004152/https://www.theguardian.com/music/2024/jan/28/taylor-swift-x-searches-blocked-fake-explicit-images |url-status=live }} Microsoft CEO Satya Nadella,{{cite magazine |url=https://variety.com/2024/digital/news/taylor-swift-ai-fake-images-microsoft-ceo-1235889371/ |title=Taylor Swift Explicit AI-Generated Deepfakes Are 'Alarming and Terrible,' Microsoft CEO Says: 'We Have to Act' |last=Spangler |first=Todd |magazine=Variety |date=2024-01-26 |access-date=2024-01-29 |archive-date=2024-01-28 |archive-url=https://web.archive.org/web/20240128235516/https://variety.com/2024/digital/news/taylor-swift-ai-fake-images-microsoft-ceo-1235889371/ |url-status=live }} the Rape, Abuse & Incest National Network,{{cite news |last1=Travers |first1=Karen |last2=Saliba |first2=Emmanuelle |title=Fake explicit Taylor Swift images: White House is 'alarmed' |url=https://abcnews.go.com/US/white-house-calls-legislation-regulate-ai-amid-explicit/story?id=106718520 |date=2024-01-27 |access-date=2024-01-29 |website=ABC News |archive-date=2024-01-28 |archive-url=https://web.archive.org/web/20240128222818/https://abcnews.go.com/US/white-house-calls-legislation-regulate-ai-amid-explicit/story?id=106718520 |url-status=live }} and SAG-AFTRA.{{cite magazine |url=https://www.rollingstone.com/music/music-news/sag-aftra-taylor-swift-ai-images-legislation-1234955473/ |title=AI-Generated Explicit Taylor Swift Images 'Must Be Made Illegal,' Says SAG-AFTRA |last=Millman |first=Ethan |magazine=Rolling Stone |date=2024-01-26 |access-date=2024-01-29 |archive-date=2024-01-29 |archive-url=https://web.archive.org/web/20240129124355/https://www.rollingstone.com/music/music-news/sag-aftra-taylor-swift-ai-images-legislation-1234955473/ |url-status=live }} Several US politicians called for federal legislation against deepfake pornography.{{cite news |url=https://www.theguardian.com/music/2024/jan/26/taylor-swift-deepfake-pornography-sparks-renewed-calls-for-us-legislation |title=Taylor Swift deepfake pornography sparks renewed calls for US legislation |last=Beaumont-Thomas |first=Ben |newspaper=The Guardian |date=2024-01-27 |access-date=2024-01-29 |archive-date=2024-01-29 |archive-url=https://web.archive.org/web/20240129031352/https://www.theguardian.com/music/2024/jan/26/taylor-swift-deepfake-pornography-sparks-renewed-calls-for-us-legislation |url-status=live }} Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually.{{cite web |url=https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill |title=Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes |last=Montgomery |first=Blake |newspaper=The Guardian |date=January 31, 2024 |access-date=January 31, 2024 |archive-date=January 31, 2024 |archive-url=https://web.archive.org/web/20240131021340/https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill |url-status=live }}

= 2024 Telegram deepfake scandal =

In August 2024, it emerged in South Korea that many teachers and female students were victims of deepfake images created by users utilizing AI technology. Journalist Ko Narin of The Hankyoreh uncovered the deepfake images through Telegram chats.{{cite web|url=https://www.bbc.com/news/articles/cpdlpj9zn9go|title=Inside the deepfake porn crisis engulfing Korean schools|date=3 September 2024 |publisher=BBC News}}{{cite web|url=https://www.reuters.com/world/asia-pacific/south-korea-police-launch-probe-into-telegram-over-online-sex-crimes-yonhap-2024-09-02/|title=South Korea police launch probe into whether Telegram abets online sex crimes, Yonhap reports|publisher=Reuters}}{{cite news|url=https://www.theguardian.com/world/2024/sep/13/from-spy-cams-to-deepfake-porn-fury-in-south-korea-as-women-targeted-again|title=From spy cams to deepfake porn: fury in South Korea as women targeted again|work=The Guardian|date=13 September 2024|last1=Rashid|first1=Raphael|last2=McCurry|first2=Justin}} On Telegram, group chats were created specifically for image-based sexual abuse of women, including middle and high school students, teachers, and even family members. Women with photos on social media platforms such as KakaoTalk, Instagram, and Facebook were often targeted as well. Perpetrators used AI bots to generate fake images, which were then sold or widely shared, along with the victims' social media accounts, phone numbers, and KakaoTalk usernames. One Telegram group reportedly drew around 220,000 members, according to a Guardian report.

Investigations revealed numerous chat groups on Telegram where users, mainly teenagers, created and shared explicit deepfake images of classmates and teachers. The revelations came in the wake of the country's troubling history of digital sex crimes, notably the Nth Room case of 2019. The Korean Teachers Union estimated that more than 200 schools had been affected by these incidents. Activists called for a "national emergency" declaration to address the problem.{{Cite web |title=South Korea faces deepfake porn 'emergency' |url=https://www.bbc.com/news/articles/cg4yerrg451o |access-date=2024-09-27 |website=BBC |date=28 August 2024 |language=en-GB}} South Korean police reported over 800 deepfake sex crime cases by the end of September 2024, a stark rise from just 156 cases in 2021, with most victims and offenders being teenagers.{{Cite web |date=2024-09-26 |title=South Korea to criminalize watching or possessing sexually explicit deepfakes |url=https://edition.cnn.com/2024/09/26/asia/south-korea-deepfake-bill-passed-intl-hnk/index.html |access-date=2024-09-27 |website=CNN |language=en}}

On September 21, 6,000 people gathered at Marronnier Park in northeastern Seoul to demand stronger legal action against deepfake crimes targeting women.{{Cite web |date=2024-09-22 |title=Thousands rally in Seoul for stronger action against deepfake crimes |url=https://www.koreatimes.co.kr/www/nation/2024/09/113_382845.html |access-date=2024-09-27 |website=Korea Times |language=en}} On September 26, following widespread outrage over the Telegram scandal, South Korean lawmakers passed a bill criminalizing the possession or viewing of sexually explicit deepfake images and videos, imposing penalties that include prison terms and fines. Under the new law, those caught buying, saving, or watching such material could face up to three years in prison or fines of up to 30 million won ($22,600). At the time the bill was proposed, creating sexually explicit deepfakes for distribution carried a maximum penalty of five years; the new legislation increased this to seven years, regardless of intent.

By October 2024, "nudify" deepfake bots on Telegram were estimated to have up to four million monthly users.{{Cite web|url=https://www.vice.com/en/article/nudify-deepfake-bots-telegram/|title='Nudify' Deepfake Bots on Telegram Are Up to 4 Million Monthly Users|first=Sammi|last=Caramela|date=2024-10-16|accessdate=2024-10-16}}{{cite magazine |last1=Burgess |first1=Matt |title=Millions of People Are Using Abusive AI 'Nudify' Bots on Telegram |url=https://www.wired.com/story/ai-deepfake-nudify-bots-telegram/ |accessdate=2024-11-02 |magazine=Wired |date=2024-10-15}}

Ethical considerations

= Deepfake child pornography =

Deepfake technology has made the creation of child pornography faster and easier than ever before. Deepfakes can be used to produce new child pornography from already existing material, or to create pornography of children who have not been subjected to sexual abuse. Deepfake child pornography can nevertheless have real and direct effects on the children depicted, including defamation, grooming, extortion, and bullying.{{Cite journal |last=Kirchengast |first=T |date=2020 |title=Deepfakes and image manipulation: criminalisation and control. |journal=Information & Communications Technology Law |volume=29 |issue=3 |pages=308–323 |doi=10.1080/13600834.2020.1794615 |s2cid=221058610}}

= Differences from generative AI pornography =

{{main article|Generative AI pornography}}

While both deepfake pornography and generative AI pornography utilize synthetic media, they differ in approach and ethical implications.{{cite news |last1=Marr |first1=Bernard |title=How AI Is Transforming Porn And Adult Entertainment |url=https://www.forbes.com/sites/bernardmarr/2019/09/27/how-ai-is-transforming-porn-and-adult-entertainment/ |access-date=December 4, 2024 |work=Forbes |date=September 27, 2019 |language=en}} Generative AI pornography is created entirely through algorithms, producing hyper-realistic content unlinked to real individuals.{{cite news |last1=Rowland |first1=Tim |title=AI porn is now a thing, and I'm ready to let the modern culture bus go on without me |url=https://www.heraldmailmedia.com/story/opinion/columns/2023/04/13/artificial-intelligence-pornography-seems-like-a-bridge-too-far-tim-rowland/70107085007/ |access-date=December 4, 2024 |work=Herald-Mail Media |date=April 13, 2023}}{{cite news |last1=Harwell |first1=Drew |title=AI-generated child sex images spawn new nightmare for the web |url=https://www.washingtonpost.com/technology/2023/06/19/artificial-intelligence-child-sex-abuse-images/ |access-date=December 4, 2024 |newspaper=The Washington Post |date=June 19, 2023}} In contrast, deepfake pornography alters existing footage of real individuals, often without consent, by superimposing faces or modifying scenes.{{cite news |title=Will AI porn transform adult entertainment – and is that a good thing? |url=https://theweek.com/media/ai-porn-adult-entertainment |access-date=December 4, 2024 |work=The Week |date=February 29, 2024 |language=en}}{{cite news |last1=Hurst |first1=Luke |title=How AI is driving an explosive rise in deepfake pornography |url=https://www.euronews.com/next/2023/10/20/generative-ai-fueling-spread-of-deepfake-pornography-across-the-internet |access-date=November 19, 2024 |work=Euronews |date=October 20, 2023 |language=en}} Hany Farid, a digital image analysis expert, has emphasized these distinctions.{{cite magazine |last1=Dickson |first1=Ej |title=They're Selling Nudes of Imaginary Women on Reddit -- and It's Working |url=https://www.rollingstone.com/culture/culture-features/ai-nudes-selling-reddit-1234708474/ |access-date=December 4, 2024 |magazine=Rolling Stone |date=April 10, 2023}}

= Consent =

Most deepfake pornography is made using the faces of people who did not consent to their image being used in such a sexual way. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content."{{Cite web |date=2023-03-27 |title=Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy |url=https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071 |access-date=2023-11-30 |website=NBC News |language=en |archive-date=2023-11-29 |archive-url=https://web.archive.org/web/20231129145930/https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071 |url-status=live }}

Combatting deepfake pornography

= Technical approach =

Deepfake detection has become an increasingly important area of research as the spread of fake videos and images has become more prevalent. One promising approach is the use of convolutional neural networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm developed specifically for deepfake detection is DeepRhythm, which has demonstrated an accuracy score of 0.98 (i.e., it correctly identified deepfakes 98% of the time). The algorithm uses a pre-trained CNN to extract features from facial regions of interest and then applies a novel attention mechanism to identify discrepancies between the original and manipulated images. While increasingly sophisticated deepfake technology presents ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes.
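As a concrete, hypothetical illustration of the general CNN-based approach (a simplified stand-in, not DeepRhythm's actual architecture), a detector can be built by fine-tuning a standard image backbone as a binary real-versus-fake classifier on face crops:

<syntaxhighlight lang="python">
# Illustrative CNN deepfake classifier: a ResNet backbone fine-tuned to
# emit a single "fake" logit per face crop. The data below are random
# placeholder tensors; a real detector trains on labeled face datasets.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # backbone (pretrained weights optional)
model.fc = nn.Linear(model.fc.in_features, 1)  # replace classifier head with 1 logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a placeholder batch of aligned face crops.
faces = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16, 1)).float()  # 1 = fake, 0 = real
optimizer.zero_grad()
loss = criterion(model(faces), labels)
loss.backward()
optimizer.step()

# At inference, a sigmoid turns the logit into a probability of "fake".
model.eval()
with torch.no_grad():
    p_fake = torch.sigmoid(model(faces[:1])).item()
</syntaxhighlight>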

Aside from detection models, video authenticating tools are also available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free and user-friendly video authenticator. Users upload a suspected video or input a link, and receive a confidence score indicating the likelihood that the video has been manipulated.
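The internals of these commercial tools are not public; as a hypothetical sketch of how such a video-level confidence score can be produced, a per-frame classifier (such as the one sketched above) can be applied to sampled frames and its outputs averaged:

<syntaxhighlight lang="python">
# Hypothetical video-level scoring: sample every Nth frame, score each
# with a per-frame real/fake classifier, and average the probabilities.
import cv2
import torch

def video_confidence(path, model, every_n=30):
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()   # frame is a BGR numpy array
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        i += 1
    cap.release()
    # Mean probability of "fake" across sampled frames, or None if unreadable.
    return sum(scores) / len(scores) if scores else None
</syntaxhighlight>

Production tools typically add face detection and cropping before scoring each frame, rather than classifying whole frames as in this sketch.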

= Legal approach =

{{As of|2023|post=,}} there is a lack of legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is addressed by the courts through existing criminal and civil laws.{{Cite web |title=Nudify Me: The Legal Implications of AI-Generated Revenge Porn |url=https://www.jdsupra.com/legalnews/nudify-me-the-legal-implications-of-ai-2348218/ |access-date=2024-03-14 |website=JD Supra |language=en}}

Victims of deepfake pornography often have claims for revenge porn, tort claims, and harassment.{{Cite web |title=Nudify Me: The Legal Implications of AI-Generated Revenge Porn |url=https://www.jdsupra.com/legalnews/nudify-me-the-legal-implications-of-ai-2348218/ |access-date=2024-03-14 |website=JD Supra |language=en |archive-date=2024-03-14 |archive-url=https://web.archive.org/web/20240314140445/https://www.jdsupra.com/legalnews/nudify-me-the-legal-implications-of-ai-2348218/ |url-status=live }} The legal consequences for revenge porn vary from state to state and country to country.{{Cite journal |last=Kirchengast |first=Tyrone |date=2020-07-16 |title=Deepfakes and image manipulation: criminalisation and control |url=http://dx.doi.org/10.1080/13600834.2020.1794615 |journal=Information & Communications Technology Law |volume=29 |issue=3 |pages=308–323 |doi=10.1080/13600834.2020.1794615 |issn=1360-0834 |s2cid=221058610 |access-date=2023-04-20 |archive-date=2024-01-26 |archive-url=https://web.archive.org/web/20240126011710/https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1794615 |url-status=live }} For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison,{{Cite web |last=Branch |first=Legislative Services |date=2023-01-16 |title=Consolidated federal laws of Canada, Criminal Code |url=https://laws-lois.justice.gc.ca/eng/acts/C-46/section-162.1.html |access-date=2023-04-20 |website=laws-lois.justice.gc.ca |archive-date=2023-06-03 |archive-url=https://web.archive.org/web/20230603182558/https://laws-lois.justice.gc.ca/eng/acts/C-46/section-162.1.html |url-status=live }} whereas in Malta it is a fine of up to €5,000.{{Cite journal |last=Mania |first=Karolina |date=2022 |title=Legal Protection of Revenge and Deepfake Porn Victims in the European Union: Findings From a Comparative Legal Study |url=https://doi.org/10.1177/15248380221143772 |journal=Trauma, Violence, & Abuse |volume=25 |issue=1 |pages=117–129 |doi=10.1177/15248380221143772 |pmid=36565267 |s2cid=255117036 |access-date=2023-04-20 |archive-date=2024-01-26 |archive-url=https://web.archive.org/web/20240126011735/https://journals.sagepub.com/doi/10.1177/15248380221143772 |url-status=live }}

The "Deepfake Accountability Act" was introduced to the United States Congress in 2019 but died in 2020.{{Cite web |title=Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 (2019 - H.R. 3230) |url=https://www.govtrack.us/congress/bills/116/hr3230 |access-date=2023-11-27 |website=GovTrack.us |language=en |archive-date=2023-12-03 |archive-url=https://web.archive.org/web/20231203000807/https://www.govtrack.us/congress/bills/116/hr3230 |url-status=live }} It aimed to make the production and distribution of digitally altered visual media that was not disclosed to be such, a criminal offense. The title specifies that making any sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants, may be fined, imprisoned for up to 5 years or both. A newer version of bill was introduced in 2021 which would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure to identify and explain any altered audio and visual elements. The bill also includes that failure to disclose this information with intent to harass or humiliate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." However this bill has since died in 2023.{{Cite web |title=DEEP FAKES Accountability Act (2021 - H.R. 2395) |url=https://www.govtrack.us/congress/bills/117/hr2395 |access-date=2023-11-27 |website=GovTrack.us |language=en |archive-date=2023-12-03 |archive-url=https://web.archive.org/web/20231203000806/https://www.govtrack.us/congress/bills/117/hr2395 |url-status=live }}

In the United Kingdom, the Law Commission for England and Wales recommended reform to criminalise the sharing of deepfake pornography in 2022.{{Cite news |last=Hill |first=Amelia |date=2022-07-07 |title=Criminal reforms target 'deepfake' and nonconsensual pornographic imagery |url=https://www.theguardian.com/law/2022/jul/07/criminal-reforms-target-deepfake-and-nonconsenual-pornographic-imagery |access-date=2024-08-18 |work=The Guardian |language=en-GB |issn=0261-3077}} In 2023, the government announced amendments to the Online Safety Bill to that end. The Online Safety Act 2023 amends the Sexual Offences Act 2003 to criminalise sharing intimate images that show or "appear to show" another person (thus including deepfake images) without consent.{{Cite legislation UK|type=act|year=2023|chapter=50|act=Online Safety Act 2023|section=188}} In 2024, the government announced that an offence criminalising the production of deepfake pornographic images would be included in the Criminal Justice Bill of 2024.{{Cite web |title=Government cracks down on 'deepfakes' creation |url=https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation |access-date=2024-08-18 |website=GOV.UK |language=en}}{{Cite news |date=2024-04-16 |title=Creating sexually explicit deepfakes to become a criminal offence |url=https://www.bbc.co.uk/news/uk-68823042 |access-date=2024-08-18 |work=BBC News |language=en-GB}} The Bill did not pass before Parliament was dissolved ahead of the general election.

In South Korea, the creation, distribution, or possession of deepfake pornography is classified as a sex crime, with a mandatory prison sentence of three to seven years under the country's Special Act on Sexual Violence Crimes.{{Cite web|url=https://www.yahoo.com/entertainment/johnny-somali-guaranteed-prison-time-222406985.html|title=Johnny Somali Guaranteed Prison Time in South Korea After AI Deepfake Scandal|last=Thomas|first=Quincy|publisher=Yahoo!|quote=Under South Korean law, the creation, distribution, or possession of non-consensual AI-generated explicit content is classified as a sexual crime. Legal experts have confirmed that this offense carries a mandatory prison sentence of up to seven years.|date=27 March 2025|accessdate=12 April 2025}}

= Controlling the distribution =

While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content, including securing removal through a court order, intellectual property tools like the DMCA takedown, reporting for terms and conditions violations of the hosting platform, and removal by reporting the content to search engines.{{Cite web |title=Un-Nudify Me: Removal Options for Deepfake Pornography Victims |url=https://www.jdsupra.com/legalnews/un-nudify-me-removal-options-for-7686408/ |access-date=2024-03-14 |website=JD Supra |language=en |archive-date=2024-03-14 |archive-url=https://web.archive.org/web/20240314140445/https://www.jdsupra.com/legalnews/un-nudify-me-removal-options-for-7686408/ |url-status=live }}

Several major online platforms have taken steps to ban deepfake pornography. {{As of|2018|post=,}} Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms.{{Cite web |last=Kharpal |first=Arjun |title=Reddit, Pornhub ban videos that use A.I. to superimpose a person's face over an X-rated actor |url=https://www.cnbc.com/2018/02/08/reddit-pornhub-ban-deepfake-porn-videos.html |access-date=2023-04-20 |website=CNBC |date=8 February 2018 |language=en |archive-date=2019-04-10 |archive-url=https://web.archive.org/web/20190410050631/https://www.cnbc.com/2018/02/08/reddit-pornhub-ban-deepfake-porn-videos.html |url-status=live }}{{Cite web |last=Cole |first=Samantha |date=2018-01-31 |title=AI-Generated Fake Porn Makers Have Been Kicked Off Their Favorite Host |url=https://www.vice.com/en/article/vby5jx/deepfakes-ai-porn-removed-from-gfycat |access-date=2023-04-20 |website=Vice |language=en |archive-date=2023-04-20 |archive-url=https://web.archive.org/web/20230420045215/https://www.vice.com/en/article/vby5jx/deepfakes-ai-porn-removed-from-gfycat |url-status=live }} In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results.{{Cite news |last=Harwell |first=Drew |date=2018-12-30 |title=Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target' |language=en-US |newspaper=The Washington Post |url=https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/ |access-date=2023-04-20 |issn=0190-8286 |archive-date=2019-06-14 |archive-url=https://web.archive.org/web/20190614074709/https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/ |url-status=live }}

See also

References