On Feb. 23, 2026, a coordinated group of 61 authorities issued a global call for AI deepfake privacy safeguards, warning that realistic AI-generated images and videos can depict identifiable people without their knowledge or consent. Regulators said the risk has intensified as image and video generation becomes embedded inside widely accessible social platforms.
The joint statement followed a series of national actions in January. Indonesia moved to block the tool described in reporting, while Malaysia restricted access, citing unresolved "inherent risks." The UK opened a formal Online Safety Act investigation and signaled readiness to pursue criminal liability for platforms that supply abuse-enabling tools. California launched a state probe into whether non-consensual sexually explicit material violates civil rights laws, and Japan demanded immediate technical fixes to stop the "undressing" of real individuals in photos.
Regulators emphasized two expectations: prevention at the design stage, and fast, user-accessible removal pathways.
- Build safeguards in from the outset, before deployment.
- Provide meaningful transparency on capabilities and limits.
- Offer rapid takedown and remedies, especially for children.