Last month, viral AI-generated pornographic images of Taylor Swift circulated on X (formerly Twitter), with one post remaining online for 17 hours and amassing more than 45 million views, 24,000 reposts, and hundreds of thousands of likes before the verified account behind it was suspended for violating platform policy. The images, allegedly created with Microsoft's text-to-image tool Designer, originated from a challenge on 4chan. The posts spurred a flood of comments and "Protect Taylor Swift" hashtags on X from the army of "Swifties" (as Taylor Swift's fans are known) seeking to bury the pornographic content. Ultimately, the controversy drew the attention of members of Congress.
Regrettably, Taylor Swift is not the only victim of deepfake porn. Malicious actors on the internet have been targeting teenage girls and creating AI-generated deepfake images at unprecedented rates. This year, for example, New Jersey high schooler Francesca Mani spoke at a news conference alongside Congressman Joe Morelle, explaining that nonconsensual AI-generated intimate images of her, along with images of 30 other girls at her school, had been shared on the internet.
While men are victims as well, women are disproportionately affected by the spread of AI-generated and altered intimate images. An MIT Technology Review report revealed that the vast majority of deepfakes target women, and a report from Sensity AI found that between 90% and 95% of deepfake videos are nonconsensual porn involving women. Unfortunately, the current legal and regulatory framework in the U.S. offers victims of such abuse little recourse.
How does deepfake technology work?