Hive Joins in Endorsing the NO FAKES Act
Today, Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act — a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors.

The legislation has been introduced by U.S. Senators Marsha Blackburn (R-Tenn.), Chris Coons (D-Del.), Thom Tillis (R-N.C.), and Amy Klobuchar (D-Minn.), along with U.S. Representatives Maria Salazar (R-Fla.) and Madeleine Dean (D-Pa.). Read the full letter here.

The NO FAKES Act

The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2025 is a bipartisan bill that would protect the voice and visual likeness of all individuals from unauthorized recreations by generative artificial intelligence.

This Act targets the use of non-consensual digital replicas in audiovisual works and sound recordings: it would hold individuals or companies liable for producing such content, and hold platforms liable for knowingly hosting it.

As a leading provider of AI solutions to hundreds of the world’s largest and most innovative organizations, Hive understands firsthand the extraordinary benefits that generative AI technology provides. However, we also recognize that bad actors are relentless in their attempts to exploit it. 

As Kevin Guo, Hive’s CEO and Cofounder, explains in the endorsement letter:

“The development of AI-generated media and AI detection technologies must evolve in parallel. We envision a future where AI-generated media is created with permission, clearly identified, and appropriately credited. We stand firmly behind the NO FAKES Act as a fundamental step in establishing oversight while keeping pace with advancements in artificial intelligence to protect public trust and creative industries alike.”

Full press release: https://www.blackburn.senate.gov/2025/4/technology/blackburn-coons-salazar-dean-colleagues-introduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas

To this end, Hive has commercialized AI-powered solutions to help digital platforms proactively detect the potential misuse of AI-generated and synthetic content. 

Detecting AI-Generated and Deepfake Content

Hive’s AI-generated and deepfake detection models can help technology companies identify unauthorized digital replications of audiovisual likeness in order to comply with the provisions outlined in the NO FAKES Act. 

The endorsement letter references the high-profile example of the song “Heart on My Sleeve,” featuring unauthorized AI-generated replicas of the voices of Drake and The Weeknd, which was played hundreds of thousands of times before being identified as fake. Streaming platforms and record labels will be able to leverage Hive’s AI-Generated Music model to proactively detect such instances of unauthorized recreations and swiftly remove them.

The harmful effects of unauthorized AI-generated content extend far beyond celebrities, but public figures remain frequent targets. Hive's Celebrity Recognition API detects the visual likeness of a broad index of well-known public figures, from celebrities and influencers to politicians and athletes. It can help platforms proactively identify bad actors who misuse celebrity visual likeness to disseminate false information or unauthorized advertisements, such as the recent unauthorized synthetic replica of Tom Hanks promoting a dental plan.

Hive’s AI-generated and deepfake detection solutions are already trusted by the United States Department of Defense to combat sophisticated disinformation campaigns and synthetic media threats. 

For more information on Hive’s AI-Generated and Deepfake Detection solutions, reach out to sales@thehive.ai or visit: https://thehive.ai/apis/ai-generated-content-classification

Streamline CSAM Reports with Moderation Dashboard’s NCMEC Integration

Hive is excited to announce that we have integrated the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline into Moderation Dashboard, streamlining the process of submitting child sexual abuse material (CSAM) reports. This feature is now available to all Moderation Dashboard customers with valid NCMEC credentials.

Ensuring Child Safety Online

The National Center for Missing & Exploited Children is a non-profit organization dedicated to protecting children from all forms of exploitation and abuse. All electronic communication service providers are required under U.S. federal law to report any known CSAM on their platforms to NCMEC’s CyberTipline—a centralized system for receiving and processing CSAM reports. These reports are later shared with law enforcement and relevant service providers so they can take further action.

Throughout our endeavors and partnerships, Hive’s commitment to online safety has been unwavering. We built this integration to help automate the reporting process, simplify our customers’ workflows, and ensure that their platforms can comply with applicable law.

Integration Workflow

The following step-by-step sample workflow outlines the integration, from the moment a user uploads an image to the platform through the actions a moderator can take. For a more detailed guide to the reporting process, refer to the following documentation.

  1. A user uploads an image to the platform.
  2. The image is processed by Hive’s proprietary CSAM Detection API, powered by Thorn, a leading nonprofit that builds technology to defend children from sexual abuse. To learn more about our Thorn partnership, read our blog posts on the topic.
  3. If the image is flagged as likely CSAM, it will surface as a link in the CSAM Review Feed. Once the link is clicked, the media will open in a new browser tab for the moderator to review. Moderation Dashboard will never display CSAM content directly within the Review Feed.
  4. From the review feed, the moderator can take two actions:
    • Perform an enforcement action (e.g. banning the user or deleting the post). A webhook is sent to the customer’s server afterward, containing the moderator’s chosen enforcement action as well as the post and user metadata, all of which are used to take the content down.
    • The system will automatically create a report, which the moderator can send to NCMEC by clicking the “Submit” button within the Review Feed. After the report is submitted, the system creates an internal log to track the report (e.g. submission date and time, as well as storing the response from NCMEC).
“Report to NCMEC” button within Review Feed
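The enforcement branch of step 4 can be sketched as a small handler on the customer's server. This is a minimal illustration only: the payload field names below are assumptions for the sketch, not Hive's documented webhook schema.

```python
# Hypothetical sketch of handling the Moderation Dashboard enforcement
# webhook on the customer's side. Field names ("action", "post", "user")
# are illustrative assumptions, not Hive's actual schema.

def handle_enforcement_webhook(payload: dict) -> dict:
    """Route the moderator's chosen enforcement action to a takedown step."""
    action = payload["action"]           # e.g. "delete_post" or "ban_user"
    post_id = payload["post"]["id"]      # post metadata from the webhook
    user_id = payload["user"]["id"]      # user metadata from the webhook

    if action == "delete_post":
        # Remove the offending post from the platform's own datastore.
        return {"deleted_post": post_id}
    if action == "ban_user":
        # Disable the offending account.
        return {"banned_user": user_id}
    raise ValueError(f"unknown enforcement action: {action}")


# Example: a webhook reporting that a moderator deleted a post.
result = handle_enforcement_webhook(
    {"action": "delete_post", "post": {"id": "p123"}, "user": {"id": "u456"}}
)
```

The actual takedown logic (database deletes, account suspension) lives entirely on the customer's side; the webhook only conveys the moderator's decision and the associated metadata.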

NCMEC Report Contents

Customers can pre-fill information fields that are constant across reports. These fields will be automatically populated for each report, reducing effort on the customer’s end. To provide our customers with full transparency, the report sent to NCMEC includes: the moderator’s information, the company’s information, the potential CSAM content, and the incident date and time.

Moderator information fields

If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.

Hive to be Lead Sponsor of Trust & Safety Summit 2025

We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025.

As Europe’s premier Trust & Safety conference, the summit is designed to empower T&S leaders to tackle operational and regulatory challenges, providing them with both actionable insights and future-focused strategies. It will be held Tuesday, March 25th and Wednesday, March 26th at the Hilton London Syon Park, UK.

The two-day event will explore themes such as regulatory preparedness, scaling trust and safety solutions, and best practices for effective content moderation, with programming that includes expert-led panels, interactive workshops, and networking events.

Hive’s CEO Kevin Guo will deliver the keynote presentation on “The Next Frontier of Content Moderation”, covering topics like multi-modal LLMs and detecting AI-generated content. Additionally, Hive will host two panels during the event:

  • Hyperscaling Trust & Safety: Navigating Growth While Maintaining Integrity. Hive will be discussing best practices for scaling trust & safety systems for online platforms experiencing hypergrowth.
  • Harnessing AI to Detect Unknown CSAM: Innovations, Challenges, and the Path Forward. Hive will be joined by partners Thorn and IWF to discuss recent advancements in CSAM detection solutions.

As the lead sponsor of the T&S Summit 2025, we are furthering our commitment to making the internet a safer place. Today, Hive’s comprehensive moderation stack empowers Trust & Safety teams of all sizes to scale their moderation workflows with both pre-trained and customizable AI models, flexible LLM-based moderation, and a moderation dashboard for streamlined enforcement of policies. 

We look forward to welcoming you to the Trust & Safety Summit 2025. If you’re interested in attending the conference, please reach out to your Hive account manager or sales@thehive.ai. Prospective conference attendees can also find more details and ticket information here. For a detailed breakdown of summit programming, download the agenda here.

To learn more about what we do at Hive, please reach out to our sales team or contact us here for further questions.

Protecting Children’s Online Safety with Internet Watch Foundation

Hive is proud to announce that we are partnering with Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online. We will be integrating their proprietary keyword and URL lists into our default Text Moderation model for all customers at no additional cost.

Our Joint Commitment to Child Safety

Making the internet a safer place is one of Hive’s core values. Our partnership with IWF allows us to use their specialized knowledge to bolster our leading content moderation tools, helping our customers better detect and flag online records of child sexual abuse. 

As part of our partnership, Hive will now include the following two IWF wordlists as part of our default Text Moderation model for all customers at no additional cost:

  1. Keyword List: Known terms and code words that offenders use to discreetly exchange child sexual abuse material (CSAM). More information can be found here.
  2. URL List: A comprehensive list of webpages confirmed to host CSAM in image or video form. More information can be found here.

With these lists, customers can now use Text Moderation to catch various keywords and URLs associated with CSAM. These lists are dynamic and will be updated on a daily basis.

A sample Text Moderation response can be found below. We recommend that all customers perform an initial evaluation to first determine if the list’s keywords are helpful for their specific use case. For more information, refer to the following documentation.
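To give a sense of how list matches might surface, here is a hypothetical sketch of a response containing IWF matches and a helper to check for them. The field names ("text_filters", "csam_keyword", "csam_url") are illustrative assumptions; consult Hive's Text Moderation documentation for the actual response schema.

```python
# Hypothetical Text Moderation response fragment showing IWF list matches.
# Field names are assumptions for illustration, not Hive's actual schema.
sample_response = {
    "status": "SUCCEEDED",
    "text_filters": [
        {"type": "csam_keyword", "value": "<matched code word>"},
        {"type": "csam_url", "value": "<matched URL>"},
    ],
}


def contains_iwf_match(response: dict) -> bool:
    """Return True if any IWF keyword or URL filter fired on the text."""
    return any(
        f["type"] in ("csam_keyword", "csam_url")
        for f in response.get("text_filters", [])
    )
```

A platform could use a check like this to route flagged text into a dedicated escalation queue rather than the general moderation feed.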

Integration with Thorn Safer Match

Our partnership also grants us access to IWF’s hash lists. Previously, we partnered with Thorn, allowing customers to integrate their Safer Match hash matching technology for CSAM detection using Hive APIs.

We can now match against IWF’s hash lists with Thorn Safer Match. If you would like this feature enabled, please reach out to our sales team (sales@thehive.ai).
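Conceptually, hash-list matching means computing a digest of incoming media and checking it against a vetted list of known-CSAM digests. The toy sketch below shows exact-hash membership only; production systems like Thorn's Safer Match also use perceptual hashing so that re-encoded or slightly altered copies still match, which a plain cryptographic hash cannot do.

```python
import hashlib

# Toy illustration of hash-list matching. Real deployments match against
# vetted lists (such as IWF's) and use perceptual hashes in addition to
# exact digests; MD5 is shown here only because hash lists in this space
# commonly include it, not as a general recommendation.

def file_digest(data: bytes) -> str:
    """Compute the hex digest used for exact-match lookup."""
    return hashlib.md5(data).hexdigest()


def matches_known_list(data: bytes, known_hashes: set) -> bool:
    """Return True if the media's digest appears in the known-hash list."""
    return file_digest(data) in known_hashes
```

The key operational point is that the platform never needs the original abusive media, only its hashes, which is what makes sharing lists between organizations like IWF, Thorn, and Hive practical.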

By combining our leading moderation tools with IWF’s specialized expertise, we hope that we can create a safer internet for children worldwide.

For more details, you can find our recent press release here, as well as our CEO Kevin Guo’s interview with Rashi Shrivastava of Forbes here. If you’re interested in learning more about what we do, please reach out to our sales team or contact us here for further questions.


State of the Deepfake: Trends & Threat Forecast for 2025