Introducing New Free X Bot To Analyze and Detect AI-Generated Content For All Users

Hive | November 11, 2025

Hive is excited to announce the launch of a new bot on X that uses our industry-leading AI models to analyze media and share results in real time, completely free for users.

How it works

Anyone on X can simply tag @hive_ai and ask whether a post, image, video, or audio clip is AI-generated. There’s plenty of flexibility in how you phrase your question; the bot understands a wide range of prompts, such as:

- Is this AI-generated?
- Is this video genuine?
- The audio sounds AI generated.
- Is this real or is this another AI-generated photo?

Hive’s detection models will automatically analyze the media and reply in real time with the results directly in-thread. In the reply, Hive provides confidence scores for whether the input is likely AI-generated or a deepfake. Videos and audio files also return frame-by-frame analysis. Finally, Hive identifies probabilities for which generative engines likely created the content (such as Sora2 or GPT). A minimal sketch of what such a reply might contain appears at the end of this post.

Accessible AI detection for X Users

As AI-generated and deepfake content becomes harder to distinguish from reality, tools like this bot are essential for restoring trust and transparency online. Every day, manipulated media spreads across social platforms, making it easy for misinformation to take hold. By making detection accessible to everyone, we’re helping rebuild confidence in the content we see and share. Beyond that, this launch marks an important step in bringing Hive’s enterprise-grade detection technology to everyday users.

Best-in-Class Technology

Today, Hive’s industry-leading AI-generated and deepfake content detection technology is trusted across both the public and private sectors. In 2024, an independent research study identified Hive as the “clear winner,” finding that our AI-generated image and video detection model outperformed competing models as well as human expert analysis. Our technology was also selected from among 36 competing solutions for a Department of War contract to support the U.S. Intelligence Community in deepfake detection of video, image, and audio content. More recently, the Department of Homeland Security’s Cyber Crimes Center has deployed Hive’s AI-generated and deepfake detection technology to support its investigations.

With this bot, we’re giving all users the power to verify what’s real. Try it out by tagging @hive_ai on X.

Learn More

You can upload individual media files to check for AI-generated and deepfake content at https://hivedetect.ai. Learn more about our enterprise AI models here.
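For illustration, the in-thread reply described above might correspond to a result structure like the following minimal sketch. All field names, scores, and the engine list here are hypothetical, chosen for readability; they are not Hive’s actual response schema.

```python
# Illustrative sketch only: field names and values are hypothetical,
# not Hive's actual response schema.
example_bot_result = {
    "ai_generated": {"confidence": 0.97},  # likelihood the media is AI-generated
    "deepfake": {"confidence": 0.08},      # likelihood of a deepfake/face swap
    # Videos and audio also come back with per-frame (or per-segment) scores.
    "frame_analysis": [
        {"time_s": 0.0, "ai_generated": 0.95},
        {"time_s": 1.0, "ai_generated": 0.98},
    ],
    # Probabilities for which generative engine likely created the content.
    "likely_engines": {"Sora2": 0.81, "GPT": 0.12, "other": 0.07},
}

if __name__ == "__main__":
    engines = example_bot_result["likely_engines"]
    top_engine = max(engines, key=engines.get)
    print(f"AI-generated confidence: {example_bot_result['ai_generated']['confidence']:.2f}")
    print(f"Most likely engine: {top_engine}")
```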
Expanding Hive’s CSAM Detection Suite with Text Classification, Powered by Thorn

Hive | July 21, 2025

We are excited to announce that Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API. Offering advanced AI-powered text detection capabilities, this API helps trust and safety teams proactively combat text-based child sexual exploitation at scale.

Our Commitment to Online Safety

Making the internet safer for everyone is at the core of Hive’s mission. Our innovative approach to content moderation and platform integrity has propelled us to become a leading voice in Trust and Safety. Over the last several years, we’ve greatly expanded our content moderation product suite. While our content moderation tools reduce human exposure to harmful content across many categories, preventing online child sexual abuse requires specialized expertise and technology. Last year, we announced our partnership with Thorn, an innovative nonprofit that transforms how children are protected from sexual abuse and exploitation in the digital age. Our enterprise-grade, cloud-based APIs allow us to serve Thorn’s proprietary technology to customers at a large scale.

Expanding Our Thorn Partnership

Under our Thorn partnership, we previously released our CSAM Detection API. This API runs two detection technologies—hash matching and an AI classifier—to detect both known and novel child sexual abuse material (CSAM) across image and video inputs. Today, we’re expanding this partnership with the CSE (Child Sexual Exploitation) Text Classifier API, which has been highly requested by many of our current Hive customers. This classifier complements our CSAM detection suite by filling a critical content gap for use cases such as detecting text-based child sexual exploitation across user messaging and conversations. With this release, Hive and Thorn can provide customers with even broader detection coverage across text, image, and video.

How the Classifier Works

The CSE Text Classifier API detects suspected child exploitation in both English and Spanish. Each text sequence submitted is tokenized before being passed into the text classifier. The classifier then returns the text sequence’s scores for each label. There are seven possible labels:

- CSA (Child Sexual Abuse) Discussion: A broad category, encompassing text fantasizing about or expressing outrage toward the subject, as well as text discussing sexually harming children in an offline or online setting.
- Child Access: Text discussing sexually harming children in an offline or online setting.
- CSAM: Text related to users talking about, producing, asking for, transacting in, and sharing child sexual abuse material.
- Has Minor: Text where a minor is unambiguously being referenced.
- Self-Generated Content: Text where users are talking about producing self-generated content, offering to share their self-generated content with others, or generally talking about self-generated images and/or videos.
- Sextortion: Text related to sextortion, where a perpetrator threatens to spread a victim’s intimate imagery in order to extort additional actions from them. This encompasses messages where an offender is sextorting another user, users talking about being sextorted, as well as users reporting sextortion either for themselves or on behalf of others.
- Not Pertinent: The text sequence does not flag any of the above labels.

If any of these labels receive a score that is above their internally set threshold, all scores will be returned in the pertinent_labels section.
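Below is an example of what a pertinent sample response might look like. This is a minimal sketch under assumed label keys and scores; the exact field names are illustrative, not the API’s documented schema.

```python
# Illustrative sketch of a pertinent response; field names and scores
# are assumptions for readability, not the API's documented schema.
example_response = {
    "status": "completed",
    "pertinent_labels": {            # returned because at least one label
        "csa_discussion": 0.12,      # scored above its internal threshold
        "child_access": 0.04,
        "csam": 0.02,
        "has_minor": 0.91,
        "self_generated_content": 0.03,
        "sextortion": 0.88,
        "not_pertinent": 0.01,
    },
}

# A simple triage rule a moderation team might layer on top of the scores:
flagged = [label for label, score in example_response["pertinent_labels"].items()
           if label != "not_pertinent" and score >= 0.8]
print("Labels to review:", flagged)  # -> ['has_minor', 'sextortion']
```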
A given text sequence might receive high scores across multiple labels. In these cases, it may be helpful to combine the label definitions to better understand the situation at hand and determine which cases are actionable under your moderation team’s specific policies. For instance, text sequences scoring high on both CSAM and Child Access may come from individuals potentially abusing children offline and producing CSAM.

Proactively Combating CSAM at Scale

Safeguarding platforms from CSAM demands scalable solutions. We’re excited to expand our partnership and power more of Thorn’s advanced technology through our enterprise-grade APIs, helping more platforms proactively and comprehensively combat CSAM and CSE text. If you have further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
Hive Joins in Endorsing the NO FAKES Act

Hive | April 9, 2025

Today, Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act — a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors. The legislation has been introduced by U.S. Senators Marsha Blackburn (R-Tenn.), Chris Coons (D-Del.), Thom Tillis (R-N.C.), and Amy Klobuchar (D-Minn.), along with U.S. Representatives Maria Salazar (R-Fla.) and Madeleine Dean (D-Penn.). Read the full letter here.

The NO FAKES Act

The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2025 is a bipartisan bill that would protect the voice and visual likeness of all individuals from unauthorized recreations by generative artificial intelligence. The Act, aimed at addressing the use of non-consensual digital replicas in audiovisual works or sound recordings, would hold individuals or companies liable for producing such content and hold platforms liable for knowingly hosting it.

As a leading provider of AI solutions to hundreds of the world’s largest and most innovative organizations, Hive understands firsthand the extraordinary benefits that generative AI technology provides. However, we also recognize that bad actors are relentless in their attempts to exploit it. As Kevin Guo, Hive’s CEO and cofounder, explains in the endorsement letter:

“The development of AI-generated media and AI detection technologies must evolve in parallel,” said Kevin Guo, CEO and cofounder of Hive. “We envision a future where AI-generated media is created with permission, clearly identified, and appropriately credited. We stand firmly behind the NO FAKES Act as a fundamental step in establishing oversight while keeping pace with advancements in artificial intelligence to protect public trust and creative industries alike.” (Source: https://www.blackburn.senate.gov/2025/4/technology/blackburn-coons-salazar-dean-colleagues-introduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas)

To this end, Hive has commercialized AI-powered solutions to help digital platforms proactively detect the potential misuse of AI-generated and synthetic content.

Detecting AI-Generated and Deepfake Content

Hive’s AI-generated and deepfake detection models can help technology companies identify unauthorized digital replications of audiovisual likeness in order to comply with the provisions outlined in the NO FAKES Act. The endorsement letter references the high-profile example of the song “Heart on My Sleeve,” featuring unauthorized AI-generated replicas of the voices of Drake and The Weeknd, which was played hundreds of thousands of times before being identified as fake. Streaming platforms and record labels can leverage Hive’s AI-Generated Music model to proactively detect such unauthorized recreations and swiftly remove them.

While the harmful effects of unauthorized AI-generated content go far beyond celebrities, Hive also offers a Celebrity Recognition API, which detects the visual likeness of a broad index of well-known public figures, from celebrities and influencers to politicians and athletes.
Hive’s Celebrity Recognition API can help platforms proactively identify bad actors misusing celebrity visual likeness to disseminate false information or unauthorized advertisements, such as the recent unauthorized synthetic replica of Tom Hanks promoting a dental plan.

Hive’s AI-generated and deepfake detection solutions are already trusted by the United States Department of Defense to combat sophisticated disinformation campaigns and synthetic media threats. For more information on Hive’s AI-Generated and Deepfake Detection solutions, reach out to sales@thehive.ai or visit: https://thehive.ai/apis/ai-generated-content-classification
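As a rough illustration of how a platform might integrate a hosted recognition endpoint like the Celebrity Recognition API described above, here is a minimal Python sketch. The endpoint URL, auth scheme, and response fields are placeholders and assumptions, not Hive’s documented API; consult Hive’s API reference for the real interface.

```python
import requests

# Placeholder values: the real endpoint, auth scheme, and response schema
# live in Hive's API reference; everything here is assumed for illustration.
HIVE_API_URL = "https://api.example.com/celebrity-recognition"  # hypothetical
API_KEY = "YOUR_API_KEY"

def detect_public_figures(image_url: str) -> list[str]:
    """Submit an image URL and return names of confidently recognized figures."""
    resp = requests.post(
        HIVE_API_URL,
        headers={"Authorization": f"Token {API_KEY}"},
        json={"url": image_url},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"matches": [{"name": ..., "confidence": ...}]}
    return [m["name"] for m in resp.json().get("matches", [])
            if m.get("confidence", 0) >= 0.9]

if __name__ == "__main__":
    print(detect_public_figures("https://example.com/suspicious-ad.jpg"))
```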
Streamline CSAM Reports with Moderation Dashboard’s NCMEC Integration

Hive | February 26, 2025

Hive is excited to announce that we have integrated the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline into Moderation Dashboard, streamlining the process of submitting child sexual abuse material (CSAM) reports. This feature is now available to all Moderation Dashboard customers with valid NCMEC credentials.

Ensuring Child Safety Online

The National Center for Missing & Exploited Children is a non-profit organization dedicated to protecting children from all forms of exploitation and abuse. All electronic communication service providers are required under U.S. federal law to report any known CSAM on their platforms to NCMEC’s CyberTipline—a centralized system for receiving and processing CSAM reports. These reports are later shared with law enforcement and relevant service providers so they can take further action.

Throughout our endeavors and partnerships, Hive’s commitment to online safety has been unwavering. We built this integration to help automate the reporting process, simplify our customers’ workflows, and ensure that their platforms can comply with applicable law.

Integration Workflow

The sample workflow below walks through the integration step by step, from a user uploading an image to the actions a moderator can take. For a more detailed guide on how the reporting process works, refer to the following documentation.

1. A user uploads an image to the platform.
2. The image is processed by Hive’s proprietary CSAM Detection API, powered by Thorn—a leading nonprofit that builds technology to defend children from sexual abuse. To learn more about our Thorn partnership, read our blog posts linked below:
   - Matching Against CSAM: Hive’s Innovative Integration with Thorn’s Safer Match
   - Expanding Our CSAM Detection API
3. If a likelihood of CSAM is detected in the image, the image surfaces as a link in the CSAM Review Feed. Once the link is clicked, the media opens in a new browser tab for the moderator to review. Moderation Dashboard will never display CSAM content directly within the Review Feed.
4. From the Review Feed, the moderator can take two actions:
   - Perform an enforcement action (e.g., banning the user or deleting the post). A webhook is then sent to the customer’s server containing the moderator’s chosen enforcement action as well as the post and user metadata, all of which are used to take the content down. (A sketch of such a webhook receiver appears at the end of this post.)
   - Submit a report. The system automatically creates a report, which the moderator can send to NCMEC by clicking the “Submit” button within the Review Feed. After the report is submitted, the system creates an internal log to track it (e.g., the submission date and time, as well as the response from NCMEC).

[Image: “Report to NCMEC” button within the Review Feed]

NCMEC Report Contents

Customers can pre-fill information fields that are constant across reports. These fields will be automatically populated for each report, reducing effort on the customer’s end. To provide our customers with full transparency, the report sent to NCMEC includes: the moderator’s information, the company’s information, the potential CSAM content, and the incident date and time.

[Image: Moderator information fields]

If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
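To make the enforcement-action step concrete, here is a minimal sketch of a customer-side webhook receiver. The route and payload fields are assumptions for illustration, not Moderation Dashboard’s documented webhook schema.

```python
# Minimal sketch of a customer-side webhook receiver.
# The route and payload fields are assumptions for illustration,
# not Moderation Dashboard's documented webhook schema.
from flask import Flask, jsonify, request

app = Flask(__name__)

def delete_post(post_id: str) -> None:
    print(f"Deleting post {post_id}")  # stand-in for your platform logic

def ban_user(user_id: str) -> None:
    print(f"Banning user {user_id}")   # stand-in for your platform logic

@app.route("/hive/moderation-webhook", methods=["POST"])  # hypothetical route
def handle_enforcement_action():
    payload = request.get_json(force=True)
    action = payload.get("enforcement_action")  # e.g. "ban_user", "delete_post"
    post_id = payload.get("post", {}).get("id")
    user_id = payload.get("user", {}).get("id")

    # Use the moderator's chosen action plus metadata to take the content down.
    if action == "delete_post" and post_id:
        delete_post(post_id)
    elif action == "ban_user" and user_id:
        ban_user(user_id)

    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)
```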
Super Bowl LIX – As Seen By AI

Hive | February 10, 2025

Next-day insights on the latest trends in marketing and culture, powered by Hive’s AI models. For more detailed analytics, download the full report below.

Download Full Report

Key Insights

Were They Here Last Year?
Brands not active during last year’s Super Bowl made up 51% of the airtime for nationally televised ads during this year’s Big Game.

Meet My Famous Friends
Celebrity integration into Super Bowl commercials has become a cornerstone of many brands’ creative decisions. This year was no different, with 60% of ads featuring at least one celebrity, up from 50% last year. Actors and actresses remain the most common type of celebrity cast in Super Bowl commercials.

EVs Unplugged
This year’s Super Bowl featured the lowest count (1) and lowest percentage of auto ads (50%) referencing electric vehicles since at least 2020.

A Part Of The Game
27 brands earned more than 5 seconds of screen time within the game and postgame telecast (excluding pregame and commercials), totaling almost two hours of cumulative screen time worth $247.8M in equivalent media value.

Request a Demo
Hive to be Lead Sponsor of Trust & Safety Summit 2025

Hive | February 5, 2025

We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025. As Europe’s premier Trust & Safety conference, the summit is designed to empower T&S leaders to tackle operational and regulatory challenges, providing them with both actionable insights and future-focused strategies.

The summit will be held Tuesday, March 25th and Wednesday, March 26th at the Hilton London Syon Park, UK. The two-day event will explore themes such as regulatory preparedness, scaling trust and safety solutions, and best practices for effective content moderation. The programming includes expert-led panels, interactive workshops, and networking events.

Hive’s CEO Kevin Guo will deliver the keynote presentation, “The Next Frontier of Content Moderation,” covering topics such as multi-modal LLMs and detecting AI-generated content. Additionally, Hive will host two panels during the event:

- Hyperscaling Trust & Safety: Navigating Growth While Maintaining Integrity. Hive will discuss best practices for scaling trust & safety systems for online platforms experiencing hypergrowth.
- Harnessing AI to Detect Unknown CSAM: Innovations, Challenges, and the Path Forward. Hive will be joined by partners Thorn and IWF to discuss recent advancements in CSAM detection solutions.

As the lead sponsor of the T&S Summit 2025, we are furthering our commitment to making the internet a safer place. Today, Hive’s comprehensive moderation stack empowers Trust & Safety teams of all sizes to scale their moderation workflows with both pre-trained and customizable AI models, flexible LLM-based moderation, and a moderation dashboard for streamlined policy enforcement.

We look forward to welcoming you to the Trust & Safety Summit 2025. If you’re interested in attending the conference, please reach out to your Hive account manager or sales@thehive.ai. Prospective attendees can also find more details and ticket information here. For a detailed breakdown of summit programming, download the agenda here.

To learn more about what we do at Hive, please reach out to our sales team or contact us here for further questions.