
The TAKE IT DOWN Act’s Requirements Take Effect in May: What Platforms Need to Do Now

On May 19, 2026, platform requirements under The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, also known as TAKE IT DOWN, go into effect, marking an important shift in how organizations handle nonconsensual intimate imagery on their online platforms. Hive endorsed the legislation in April 2025 alongside other technology companies and organizations.

Nonconsensual content continues to appear across online platforms, creating serious risks for individuals and enforcement challenges for organizations. Generative AI tools have changed how this material is created and distributed. Images and videos no longer need to originate from real events: artificial media can now depict realistic scenarios involving real people.

The TAKE IT DOWN Act introduces federal restrictions that apply to both real and AI-generated intimate imagery shared without consent. It also sets clearer expectations for how platforms must respond when this type of content is reported.

What the Act establishes 

Subject to certain exclusions, the TAKE IT DOWN Act prohibits publishing intimate visual depictions without consent under the following circumstances:

  • For adults, where an intimate visual depiction of an identifiable individual is knowingly published with the intent to cause harm, or where publication actually causes harm. For authentic imagery, the law also covers content created or obtained under circumstances in which the person had a reasonable expectation of privacy.
  • For minors, where publication is intended to abuse, harass, or sexually exploit the child.

Violations may lead to restitution, fines, and criminal penalties. The Act also prohibits certain threats to distribute nonconsensual intimate imagery.

Platforms that host user-generated content must provide a way for individuals to report nonconsensual intimate imagery. After receiving a valid notice, they must remove qualifying content within 48 hours and make reasonable efforts to remove known duplicates. Noncompliance may subject platforms to enforcement by the Federal Trade Commission.
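To make the operational constraint concrete, here is a minimal sketch of tracking the 48-hour removal window for a valid report. This is an illustration only, not Hive's API; the function and variable names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# 48-hour removal window established by the TAKE IT DOWN Act
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(reported_at: datetime) -> datetime:
    """Latest time by which reported content must be removed."""
    return reported_at + REMOVAL_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True if the 48-hour window for a valid report has elapsed."""
    return now >= removal_deadline(reported_at)

report_time = datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(report_time))  # 2026-05-22 09:00:00+00:00
```

In practice a trust and safety queue would sort open reports by this deadline so reviewers handle the reports closest to expiry first.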

Similar regulatory requirements are emerging internationally. The United Kingdom, for example, has adopted legislation establishing a 48-hour removal process for abusive intimate content.

Building safety infrastructure for the synthetic media age

As platforms adapt to the requirements introduced by the TAKE IT DOWN Act, detection and moderation systems should play a central role in enabling timely and consistent enforcement. Meeting these obligations reliably and at scale is likely to require technologies capable of accurately identifying and evaluating potentially violative content. Hive’s best-in-class solutions support these workflows across several key areas:

Visual Moderation and Vision Language Model: Detect nudity and sexually suggestive images and videos in both authentic and AI-generated media, helping platforms surface intimate content that may fall within the scope of nonconsensual imagery reports.

AI-Generated & Deepfake Detection: Automatically identify whether images or videos are AI-generated and detect manipulated media, including deepfakes and face swaps, helping platforms evaluate reports involving synthetic or altered intimate imagery that may fall under the TAKE IT DOWN Act.

Celebrity Recognition: Flag the visual likeness of well-known public figures in image uploads, enabling platforms to detect potential misuse of identity or artificial impersonations.

Reverse Image Search: Identify duplicate images on the platform and matching images already circulating online. This is particularly important because identifying and removing duplicates is itself a direct requirement in the legislation. 
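As a simplified illustration of the duplicate-removal obligation, the sketch below flags exact byte-for-byte re-uploads of previously removed content using cryptographic hashes. This is a toy example with hypothetical names, not Hive's implementation; production systems such as Reverse Image Search rely on perceptual matching, which also catches resized, cropped, or re-encoded copies that exact hashing misses.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest identifying an exact byte-for-byte copy."""
    return hashlib.sha256(data).hexdigest()

class DuplicateIndex:
    """Hash index of removed content, used to flag re-uploads."""

    def __init__(self) -> None:
        self._removed: set[str] = set()

    def mark_removed(self, data: bytes) -> None:
        """Record content that has been taken down."""
        self._removed.add(content_hash(data))

    def is_known_duplicate(self, data: bytes) -> bool:
        """True if this upload exactly matches removed content."""
        return content_hash(data) in self._removed

index = DuplicateIndex()
index.mark_removed(b"violating-image-bytes")
print(index.is_known_duplicate(b"violating-image-bytes"))  # True
print(index.is_known_duplicate(b"other-image-bytes"))      # False
```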

CSAM Detection API: Detect known and novel child sexual abuse material across images and video, including AI-generated content, using hash matching and AI classification for fast detection and escalation.

Demographic Attributes API: Estimate facial attributes such as age, supporting safety and moderation workflows involving content that may include minors.

Moderation Dashboard: Streamline user reports, case management, and human review, enabling trust and safety teams to evaluate reported content efficiently and meet response timelines such as the Act’s 48-hour removal requirement.

The strength and reliability of these capabilities have also driven adoption in some of the most demanding operational settings. Our technology was selected from among 36 competing solutions for a Department of War contract supporting the U.S. Intelligence Community with deepfake detection across video, image, and audio content. The Department of Homeland Security’s Cyber Crimes Center also deploys Hive’s AI-Generated and Deepfake Detection technology to support its investigations into child exploitation across international borders.

As regulatory frameworks and synthetic content risks continue to evolve, detection and moderation capabilities are becoming core components of platform operations. Reliable systems for assessing manipulated and AI-generated media will increasingly shape how platforms maintain safety, compliance, and user trust.

For more information on Hive’s solutions, reach out to sales@thehive.ai or visit: https://thehive.ai/

