Hive Joins in Endorsing the NO FAKES Act
Hive | April 9, 2025 (updated July 21, 2025)

Contents:
- The NO FAKES Act
- Detecting AI-Generated and Deepfake Content

Today, Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act, a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors. The legislation was introduced by U.S. Senators Marsha Blackburn (R-Tenn.), Chris Coons (D-Del.), Thom Tillis (R-N.C.), and Amy Klobuchar (D-Minn.), along with U.S. Representatives Maria Salazar (R-Fla.) and Madeleine Dean (D-Pa.). Read the full letter here.

The NO FAKES Act

The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2025 is a bipartisan bill that would protect the voice and visual likeness of all individuals from unauthorized recreation by generative artificial intelligence. The Act, aimed at the use of non-consensual digital replicas in audiovisual works or sound recordings, would hold individuals or companies liable for producing such content and hold platforms liable for knowingly hosting it.

As a leading provider of AI solutions to hundreds of the world's largest and most innovative organizations, Hive understands firsthand the extraordinary benefits that generative AI technology provides. However, we also recognize that bad actors are relentless in their attempts to exploit it. As Kevin Guo, Hive's CEO and cofounder, explains in the endorsement letter:

"The development of AI-generated media and AI detection technologies must evolve in parallel. We envision a future where AI-generated media is created with permission, clearly identified, and appropriately credited. We stand firmly behind the NO FAKES Act as a fundamental step in establishing oversight while keeping pace with advancements in artificial intelligence to protect public trust and creative industries alike."

(Source: https://www.blackburn.senate.gov/2025/4/technology/blackburn-coons-salazar-dean-colleagues-introduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas)

To this end, Hive has commercialized AI-powered solutions that help digital platforms proactively detect potential misuse of AI-generated and synthetic content.

Detecting AI-Generated and Deepfake Content

Hive's AI-generated and deepfake detection models can help technology companies identify unauthorized digital replications of audiovisual likeness in order to comply with the provisions outlined in the NO FAKES Act. The endorsement letter references the high-profile example of the song "Heart on My Sleeve," featuring unauthorized AI-generated replicas of the voices of Drake and The Weeknd, which was played hundreds of thousands of times before being identified as fake. Streaming platforms and record labels can leverage Hive's AI-Generated Music model to proactively detect such unauthorized recreations and swiftly remove them.

The harmful effects of unauthorized AI-generated content go far beyond celebrities, but public figures remain frequent targets. Hive therefore also offers a Celebrity Recognition API, which detects the visual likeness of a broad index of well-known public figures, from celebrities and influencers to politicians and athletes. The API can help platforms proactively identify bad actors misusing celebrity visual likeness to disseminate false information or unauthorized advertisements, such as the recent unauthorized synthetic replica of Tom Hanks promoting a dental plan. Hive's AI-generated and deepfake detection solutions are already trusted by the United States Department of Defense to combat sophisticated disinformation campaigns and synthetic media threats.
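As an illustration only, here is a minimal sketch of how a platform might act on a detection score returned by a classifier like the ones described above. The response shape, the `ai_generated_score` field name, and the threshold value are assumptions made for this example, not Hive's documented API.

```python
# Hypothetical helper for triaging classifier output.
# `response` is assumed to be a parsed JSON dict containing an
# "ai_generated_score" in [0, 1]; the field name is illustrative.
def should_flag_for_review(response: dict, threshold: float = 0.9) -> bool:
    """Flag content for human review when the AI-generated score
    meets or exceeds the review threshold."""
    score = response["ai_generated_score"]  # assumed field name
    return score >= threshold
```

A platform could route flagged items into a human-review or takedown queue, tuning the threshold to trade off missed deepfakes against review volume.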
For more information on Hive’s AI-Generated and Deepfake Detection solutions, reach out to sales@thehive.ai or visit: https://thehive.ai/apis/ai-generated-content-classification
“Clear Winner”: Study Shows Hive’s AI-Generated Image Detection API is Best-in-Class
Hive | September 10, 2024 (updated July 29, 2025)

Contents:
- Navigating an Increasingly Generative World
- Structuring the Study
- Evaluation Methods and Findings
- Final Thoughts Moving Forward

Navigating an Increasingly Generative World

To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive's commitment to providing customers with API-accessible solutions to challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti of the University of Chicago, which sought to determine who is more effective at classifying images as AI-generated: humans or automated detectors.

The study addresses a growing problem in the generative AI space: as generative models become more advanced, the boundary between human-created art and AI-generated images becomes increasingly blurred. With such powerful tools accessible to the general public, a range of legal and ethical concerns have been raised about the misuse of this technology. These concerns are pertinent because misuse of generative AI models harms both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes such as spreading misinformation, committing fraud, and scamming individuals and organizations. Because only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown that AI models deteriorate if their training data consists solely of AI-generated content; this is where Hive's classifier comes in handy.
The study's results show that Hive's model outperforms both its automated peers and highly trained human experts at differentiating between human-created art and AI-generated images across most scenarios. This post examines the study's methodology and findings, and highlights our model's consistent performance across varied inputs.

Structuring the Study

In the experiment, the researchers evaluated the performance of five automated detectors (three of them commercially available, including Hive's model) and of humans against a dataset containing both human-created and AI-generated images across a variety of art styles. Human participants were divided into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience identifying AI-generated images.

The dataset consists of four image groups: human-created art, AI-generated images, "hybrid images" that combine generative AI with human effort, and perturbed versions of human-created art. A perturbation is a minor change to a model input aimed at exposing vulnerabilities in the model. The study uses four perturbation methods: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which perturbs at the pixel level), and Glaze (a tool that protects human artists from mimicry by introducing imperceptible perturbations into artwork). After evaluating the models on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.
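To make the idea of a perturbation concrete, here is a minimal sketch of one of the four methods, additive Gaussian noise, applied to an image represented as a flat list of 8-bit grayscale pixel values. This illustrates the general technique only, not the study's exact implementation; the `sigma` strength and fixed seed are assumptions for the example.

```python
import random

def add_gaussian_noise(pixels, sigma=8.0, seed=0):
    """Perturb 8-bit pixel values with zero-mean Gaussian noise,
    clamping each result back into the valid [0, 255] range."""
    rng = random.Random(seed)  # seeded for reproducibility
    noisy = []
    for p in pixels:
        q = p + rng.gauss(0.0, sigma)
        noisy.append(max(0, min(255, round(q))))
    return noisy
```

To a human viewer the perturbed image looks nearly identical to the original, which is exactly why such inputs are useful for probing whether a detector's decisions are robust or brittle.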
Evaluation Methods and Findings

The researchers evaluated the automated detectors on four metrics: overall accuracy (the ratio of correctly classified images to the entire dataset), false positive rate (the ratio of human-created art misclassified as AI-generated), false negative rate (the ratio of AI-generated images misclassified as human-created), and AI detection success rate (the ratio of AI-generated images correctly classified as AI-generated to the total number of AI-generated images).

Among automated detectors, Hive's model emerged as the "clear winner" (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, but it also has a 0% false positive rate (i.e., it never misclassifies human art) and a low 3.17% false negative rate (i.e., it rarely misclassifies AI-generated images). According to the authors, this may be attributable to Hive's rich collection of generative AI datasets, which provide larger and more diverse training data than its competitors'.

Additionally, Hive's model proved resistant to most perturbation methods, though it faced some challenges classifying AI-generated images processed with Glaze. It should be noted, however, that Glaze is primarily a protection tool for human artwork; glazing AI-generated images is a non-traditional use case with minimal training data available as a result. Thus, the model's performance on Glazed AI-generated images has little bearing on its overall quality.

Final Thoughts Moving Forward

When it comes to automated detectors and humans alike, Hive's model is unparalleled. Even compared to expert human artists, Hive's model classifies images with higher confidence and accuracy. While the study notes areas where the model could improve, it is important to note that the study was published in February 2024.
In the months following the study’s publication, Hive’s model has vastly improved and continues to expand its capabilities, with 12+ model architectures added since. If you’d like to learn more about Hive’s AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. However, don’t just trust us, test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.
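For reference, the four evaluation metrics described above can be computed from a detector's predictions as follows. This is a self-contained sketch using the study's task framing, with the label convention (1 = AI-generated, 0 = human-created) assumed for illustration.

```python
def detector_metrics(y_true, y_pred):
    """Compute the study's four metrics from parallel lists of
    ground-truth and predicted labels (1 = AI-generated, 0 = human)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # AI images caught
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # AI images missed
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # human art flagged
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # human art passed
    return {
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        "ai_detection_success_rate": tp / (tp + fn) if (tp + fn) else 0.0,
    }
```

Note that the false positive rate is computed only over human-created images and the false negative rate only over AI-generated ones, which is why a detector can have a 0% false positive rate while still missing some AI-generated images.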