AI image tools are being exploited to create realistic bikini deepfakes of women without consent, despite existing safety rules and a growing legislative response. A December 23, 2025 WIRED investigation reported that Reddit users shared “jailbreak” prompts to bypass safeguards in the image-generation features of Google’s Gemini and OpenAI’s tools, producing photorealistic bikini images from photos of clothed women.
Platforms have moved to curb the activity, but enforcement has struggled to keep pace with rapid technical advances. Reddit removed offending threads and on December 17 banned the r/ChatGPTjailbreak subreddit, which had more than 200,000 members, while Google and OpenAI reiterated their bans on explicit content and non-consensual likenesses and penalized accounts where violations were found. Still, experts warn the problem is widespread: research cited in the report indicates that 96% of deepfakes are non-consensual and 99% of sexual deepfakes target women, and Corynne McSherry of the Electronic Frontier Foundation described such outputs as “abusively sexualized” harms.
Lawmakers are now pressing for tougher consequences. In the United States, the Deepfake Liability Act, introduced in December 2025, would strip platforms of liability protections when they ignore victims’ reports, building on the 2025 Take It Down Act, which criminalized the distribution of non-consensual intimate imagery, including deepfakes. In the United Kingdom, officials led by Tech Secretary Liz Kendall announced plans on December 17 to criminalize the creation of sexual deepfakes, ban “nudification” apps, and strengthen online safety protections for women and girls. The open question is whether enforcement and new laws can keep pace with fast-moving AI tools.