
President Trump signs groundbreaking ‘Take It Down Act’ to combat deepfakes and revenge porn, creating new protections for victims of digital exploitation.
At a Glance
- President Trump signed the bipartisan “Take It Down Act” criminalizing nonconsensual sharing of explicit imagery online
- The law requires platforms to remove such content within 48 hours of notification
- The legislation specifically targets AI-generated deepfakes, which have created new avenues for harassment
- First Lady Melania Trump supported the bill, which was introduced by Senators Ted Cruz and Amy Klobuchar
- Critics express concerns about potential First Amendment issues and overly broad language
New Federal Protections Against Digital Exploitation
President Donald Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, commonly known as the “Take It Down Act,” during a ceremony in the White House Rose Garden. The bipartisan legislation criminalizes the publication of intimate images without consent, including both authentic photos and AI-generated deepfakes. This landmark law establishes federal protections for victims of digital exploitation and creates a legal framework for removing harmful content from online platforms.
The legislation addresses the growing problem of technology being weaponized against individuals, particularly women. Online platforms will now be required to remove flagged content within 48 hours after receiving notification from victims. Perpetrators who publish or threaten to publish intimate imagery without consent can face criminal charges under the new law. The act received strong bipartisan support in Congress before reaching the president’s desk, signaling widespread recognition of the urgent need to address this form of digital harassment.
Trump signed a law criminalizing the spread of nonconsensual intimate imagery, including AI-generated deepfakes and revenge porn. https://t.co/Sz7Mofxz9X
— POLITICO (@politico) May 19, 2025
Presidential Response to AI-Generated Threats
During the signing ceremony, President Trump described the situation as “horribly wrong” and emphasized the need for stronger legal protections in the digital age. The legislation comes at a time when artificial intelligence tools have made creating convincing deepfakes increasingly accessible, opening new avenues for harassment and exploitation. The president specifically highlighted how technological advancements have been misused to target and harm women through the creation and distribution of explicit images without their knowledge or consent.
“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,” Trump said during the signing ceremony.
First Lady Melania Trump played a significant role in supporting the legislation. The bill was introduced by an unlikely bipartisan pair – Republican Senator Ted Cruz of Texas and Democratic Senator Amy Klobuchar of Minnesota. Industry leaders, including Meta, which owns Facebook and Instagram, have expressed support for the legislation. Meta spokesman Andy Stone emphasized the company’s commitment to preventing the sharing of intimate images without consent, acknowledging the devastating impact such experiences can have on victims.
Victims of revenge porn and deepfake intimate images now have greater protection from online predators.
“The Take It Down Act is a landmark victory for victims who have been targeted through real and AI-generated non-consensual intimate imagery online.” https://t.co/VdLH6yVMcm
— Heritage Foundation (@Heritage) May 19, 2025
Balancing Protection and Free Speech
Despite widespread support, the Take It Down Act has drawn criticism from some free speech advocates and civil liberties organizations. The Electronic Frontier Foundation and the Cyber Civil Rights Initiative have expressed concerns about the law’s broad language and potential for misuse. Critics argue that the legislation could inadvertently lead to censorship of legitimate content and may place excessive burdens on smaller online platforms that lack the resources to implement comprehensive content moderation systems.
“While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy,” the Electronic Frontier Foundation noted in its analysis of the legislation.
The law is viewed by many as a significant step toward establishing more comprehensive regulations for social media platforms and artificial intelligence technologies. However, questions remain about implementation challenges, including how platforms will verify claims and handle potential false reports. The legislation’s effectiveness will likely depend on how courts interpret its provisions and how technology companies adapt their content moderation practices to comply with the new requirements while respecting First Amendment protections.