Announcing Hive's Partnership with the Defense Innovation Unit

Hive | December 5, 2024

Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.

Combating Synthetic Media and Disinformation

Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary to protect the integrity of vital information systems and to stay vigilant against the future spread of misinformation, threats, and conflicts at a national scale.

Hive's reputation as a frontline defender against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand; we have been, and continue to be, committed to delivering unmatched detection tools that mitigate these risks with accuracy and speed.

Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.

Our Cutting-Edge Tools

To best empower U.S. defense forces against potential threats, we have provided five proprietary models that detect whether an input is AI-generated or a deepfake. An input flagged as AI-generated was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the original faces has been swapped with another person's face. The models we've provided are as follows:

- AI-Generated Detection (Image and Video), which detects if an image or video is AI-generated.
- AI-Generated Detection (Audio), which detects if an audio clip is AI-generated.
- Deepfake Detection (Image), which detects if an image contains one or more deepfaked faces.
- Deepfake Detection (Video), which detects if a video contains one or more deepfaked faces.
- Liveness (Image and Video), which detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside the primary image).

Forging a Path Forward

Even as new threats continue to emerge and escalate, Hive remains steadfast in our commitment to providing the world's most capable AI models for validating the safety and authenticity of digital content.

For more details, you can find our recent press release here and the DIU's press release here. If you're interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
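For readers who want a concrete picture of how detection models like the ones listed above are typically consumed, here is a minimal sketch of a synchronous API call in Python. The endpoint path, form fields, and response traversal are assumptions based on typical Hive API usage, not an exact specification; consult the official documentation for your project's actual request format and class names.

```python
import requests

API_KEY = "YOUR_PROJECT_API_KEY"  # issued per project in the Hive dashboard

# Minimal sketch: submit an image URL to a detection project for synchronous
# scoring. Endpoint and field names are illustrative assumptions.
response = requests.post(
    "https://api.thehive.ai/api/v2/task/sync",
    headers={"Authorization": f"Token {API_KEY}"},
    data={"url": "https://example.com/suspect-image.jpg"},  # placeholder URL
    timeout=30,
)
response.raise_for_status()
result = response.json()

# Hypothetical traversal of per-class scores; real field names may differ.
for output in result.get("status", []):
    classes = (
        output.get("response", {})
        .get("output", [{}])[0]
        .get("classes", [])
    )
    for klass in classes:
        print(klass.get("class"), klass.get("score"))
```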
Model Explainability With Text Moderation

Hive | December 2, 2024

Hive is excited to announce that we are releasing a new API: Text Moderation Explanations! This API helps customers understand why our Text Moderation model assigns particular scores to text strings.

The Need for Explainability

Hive's Text Moderation API scans a text string or message, interprets it, and returns a score from 0 to 3 that maps to a severity level across a number of top-level classes and dozens of languages. Today, hundreds of customers send billions of text strings each month through this API to protect their online communities.

A top feature request has been explanations for why our model assigns the scores it does, especially for foreign languages. While some moderation scores may be clear, there can be ambiguity around edge cases. This is where our new Text Moderation Explanations API comes in, delivering additional context and visibility into moderation results in a scalable way. With Text Moderation Explanations, human moderators can quickly interpret results and use the additional information to take appropriate action.

A Supplement to Our Text Moderation Model

Our Text Moderation classes are ordered by severity, ranging from level 3 (most severe) to level 0 (benign). These classes correspond to the possible scores Text Moderation can assign a text string. For example, if a text string falls under the "sexual" head and contains sexually explicit language, it would be given a score of 3.

The Text Moderation Explanations API takes three inputs: a text string, its class label ("sexual", "bullying", "hate", or "violence"), and the score it was assigned (3, 2, 1, or 0). The output is a text string that explains why the original input text was given that score relative to its class. Note that Explanations is currently supported only for the select multilevel heads corresponding to the class labels listed above.

To develop the Explanations model, we used a supervised fine-tuning process: labeled data, annotated internally at Hive by native speakers, was used to fine-tune the original model for this specialized task. This approach allows us to support a number of languages beyond English.

Comprehensive Language Support

We have built our Text Moderation Explanations API with broad initial language support, addressing the crucial problem of understanding why a text string in one's non-native language was scored a certain way. We currently support eight languages and four top-level classes for Text Moderation Explanations.

Text Moderation Explanations is now included at no additional cost as part of our Moderation Dashboard product. Additionally, customers can access the Explanations model directly through an API (refer to the documentation). In future releases, we anticipate adding further language and top-level class support.

If you're interested in learning more or gaining test access to the Text Moderation Explanations model, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
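To make the three-input contract above concrete, here is a minimal sketch of what an Explanations request might look like from Python. The endpoint path and payload field names are illustrative assumptions; refer to the Explanations documentation for the actual request schema.

```python
import requests

API_KEY = "YOUR_API_KEY"

# Hypothetical payload mirroring the three documented inputs: the text string,
# its class label, and the severity score Text Moderation assigned it.
payload = {
    "text": "example message to explain",   # the string that was moderated
    "class_label": "hate",                  # "sexual", "bullying", "hate", or "violence"
    "score": 2,                             # 3, 2, 1, or 0
}

resp = requests.post(
    "https://api.thehive.ai/api/v2/task/sync",  # illustrative endpoint, not authoritative
    headers={"Authorization": f"Token {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# The response carries a text string explaining why the input received
# that score relative to its class.
print(resp.json())
```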
Expanding Our CSAM Detection API

Hive | November 21, 2024

We are excited to announce that Hive is now offering Thorn's predictive technology through our CSAM detection API! This API now enables customers to identify novel cases of child sexual abuse material (CSAM) in addition to detecting known CSAM using hash-based matching.

Our Commitment to Child Internet Safety

At Hive, making the internet safer is core to our mission. While our content moderation tools help reduce human exposure to harmful content across many categories, addressing CSAM requires specialized expertise and technology. That's why we're expanding our existing partnership with Thorn, an innovative nonprofit that builds technology to defend children from sexual abuse and exploitation in the digital age.

Until now, our integration with Thorn focused on hash-matching technology to detect known CSAM. The new CSAM detection API builds on this foundation by adding advanced machine learning capabilities that can identify previously unknown CSAM. By combining Thorn's industry-leading CSAM detection technology with Hive's comprehensive content moderation suite, we provide platforms with robust protection against both known and newly created CSAM.

How the Classifier Works

The classifier first generates embeddings of the uploaded media. An embedding is a list of computer-generated scores between 0 and 1. After generating the embeddings, Hive permanently deletes all of the original media, and the classifier then determines whether the content is CSAM based on the embeddings alone. This process ensures that we do not retain any CSAM on our servers.

The classifier returns a score between 0 and 1 that predicts whether a video or image is CSAM. The response object has the same general structure for both image and video inputs. Please note that Hive returns both results together: probability scores from the classifier and any match results from hash matching against the aggregated hash database.

For a detailed guide on how to use Hive's CSAM detection API, refer to the documentation.

Building a Safer Internet

Protecting platforms from CSAM demands scalable solutions. The problem is complex, but our integration with Thorn's advanced technology provides an efficient way to detect and stop CSAM, helping to safeguard children and build a safer internet for all.

If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
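As an illustration of the combined response described above, here is a minimal sketch of how a platform might act on the two signals together. The field names and the 0.9 threshold are hypothetical, chosen only for illustration; consult the documentation for the real response schema and set thresholds according to your own review policies.

```python
# Minimal sketch of routing a hypothetical CSAM detection response that
# carries both a classifier probability and hash-match results.
def route_csam_result(result: dict) -> str:
    classifier_score = result.get("classifier_score", 0.0)  # 0..1, hypothetical field
    hash_matches = result.get("hash_matches", [])            # hypothetical field

    if hash_matches:
        # Known CSAM via hash matching: highest-confidence signal, escalate.
        return "block_and_report"
    if classifier_score >= 0.9:  # example threshold, not a recommendation
        # Likely novel CSAM caught by the predictive classifier.
        return "block_and_queue_for_review"
    return "allow"

# Usage with a fabricated result object:
print(route_csam_result({"classifier_score": 0.95, "hash_matches": []}))
```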
Announcing General Availability of Hive Models

Hive | October 4, 2024

We are excited to announce that we are making select proprietary Hive models and popular open-source generative models directly accessible for customers to deploy and integrate into their workflows. Starting today, customers can create projects by themselves with just a few clicks.

Hive Proprietary Models

We have made select proprietary Hive models accessible to customers across our Understand and Search model categories, ranging from our Celebrity Recognition API to our Speech-to-Text model. For a full list of generally available models, see our pricing page here.

Additional Model Offerings

We currently offer a variety of open-source image generation models and large language models (LLMs) that customers can access directly. For image generation, four options are available today, with additional models being served in the coming weeks: SDXL (Stable Diffusion XL), SDXL Enhanced, Flux Schnell, and Flux Schnell Enhanced. SDXL Enhanced and Flux Schnell Enhanced are Hive's enhanced versions of the base models, served exclusively to our customers. The differences are outlined below:

- SDXL (Stable Diffusion XL): A latent diffusion text-to-image generation model produced by Stability AI. Trained on a larger dataset than the base Stable Diffusion model, with a larger UNet enabling better generation.
- SDXL Enhanced: Hive's enhanced version of SDXL, served exclusively to our customers. Tailored toward a photorealistic and refined art style with extreme detail.
- Flux Schnell: The fastest model in Flux's suite of text-to-image models, capable of generating images in 4 or fewer steps. Best suited for local development and personal use.
- Flux Schnell Enhanced: Hive's enhanced version of Flux Schnell, trained on our proprietary data while retaining the base model's speed and efficiency, and served exclusively to our customers. Generates images across a wide range of artistic styles, with a specialization in photorealism that has earned high customer satisfaction in past user studies.

For LLMs, a selection of Meta's Llama models from the Llama 3.1 and 3.2 series is available now. The differences are outlined below:

- Llama 3.1 8B Instruct: A multilingual, instruction-tuned, text-only model. Compared to other available open-source and closed chat models, the Llama 3.1 instruction-tuned models achieve higher scores across common industry benchmarks. We also provide this model in a 70B size.
- Llama 3.1 70B Instruct: The larger Llama 3.1 multilingual, instruction-tuned, text-only model, with the same benchmark strengths noted above. We also provide this model in an 8B size.
- Llama 3.2 1B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits on both edge and mobile devices. It excels at use cases such as summarizing or rewriting inputs, as well as instruction following. We also provide this model in a 3B size.
- Llama 3.2 3B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits on both edge and mobile devices. It excels at the same use cases as the 1B model, which we also provide.

We plan to make more models available for direct use in the coming months.

How to Create a Project

Creating new projects has never been easier. To get started, go to thehive.ai and click the "Go to Dashboard" button in the top-right corner.

[Screenshot: Home Page]

If you are not logged in, the "Go to Dashboard" button will redirect you to the sign-in page. Either sign in to an existing account or click the blue "Sign up" link at the bottom of the page to create a new account.

[Screenshot: Sign In Page]

You will receive an email to verify your account after signing up. Once you have logged in or verified your new account, you will be redirected to the main dashboard. For new accounts, an organization named "(User Name)'s personal organization" is created automatically, and your current organization is visible in the top-right corner.

Before you can submit tasks, you will need to accept the Terms of Use and add credits to your account. To accept the Terms of Use, click the "View Terms and Conditions" button at the bottom of the page. You will need to do this for every additional organization you create.

[Screenshot: Main Dashboard]

To add funds to your credit balance, locate the "Billing" section in the bottom-left corner of the dashboard and click the blue "Add Credit" button, which will redirect you to a page where you can add a payment method.

[Screenshots: Billing, Add Payment Method]

Now you're ready to create your own projects. On any page, click the "Products" tab on the left side of the header and select "Models" from the dropdown menu. You will be taken to a page listing all of your current projects. To create a new project, click the plus (+) sign next to "Projects" on the top-left side of the screen. You will then be asked to choose your project type; select "Hive Models."

[Screenshot: Project Types]

Next, you will be redirected to a page containing our available models. Click to select the desired model for your project.

[Screenshot: Project Format]

After selecting your desired model, configure your project. Change your project's name using the text box, and once you hit the blue "Create" button, your project will be live.

[Screenshot: Project Configure]

Upon project creation, you will be redirected to the project interface, where you can view your API key by clicking the "API Keys" button in the top right.

[Screenshot: Project Interface]

Using this API key, you can call the API by making a cURL request in your terminal; a sketch of an equivalent request appears below. To interpret the results, please refer to our documentation and look up the relevant model and its class definitions.

[Screenshot: Sample cURL Request and Result]

For pricing details, please reference our model pricing table here. If you run into any issues building your projects, please feel free to reach out to us at support@thehive.ai and we will be happy to help. If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
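Because the sample cURL request shown in the dashboard is project-specific, here is a rough Python equivalent of submitting a task to a newly created project. The endpoint, form fields, and response layout are assumptions based on typical Hive API usage; copy the exact request from your project's dashboard rather than from this sketch.

```python
import requests

API_KEY = "YOUR_PROJECT_API_KEY"  # from the "API Keys" button in your project

# Rough Python equivalent of the dashboard's sample cURL request: submit a
# publicly accessible media URL to your project for synchronous processing.
# Endpoint path and field names here are illustrative, not authoritative.
resp = requests.post(
    "https://api.thehive.ai/api/v2/task/sync",
    headers={"Authorization": f"Token {API_KEY}"},
    data={"url": "https://example.com/sample.jpg"},  # placeholder media URL
    timeout=60,
)
resp.raise_for_status()

# Look up the relevant model's class definitions in the documentation to
# interpret the scores contained in this response.
print(resp.json())
```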
Announcing Hive's Integration with NVIDIA NIM

Hive to Accelerate AI Adoption in Private Clouds and On-Prem Environments Using NVIDIA NIM

Hive | September 23, 2024

Hive is excited to announce the groundbreaking integration of our proprietary AI models with NVIDIA NIM. For the first time, this collaboration will allow Hive customers to deploy our industry-leading AI models in private clouds and on-premises data centers. We are also announcing that, for the remainder of 2024, internet social platforms can receive up to 90 days of free trial access to our models. To learn more, check out the press release here.

The first Hive models to be made available with NVIDIA NIM are our AI-generated content detection models, which allow customers to identify AI-generated images, video, and audio. We plan to make additional models available through NVIDIA NIM in the coming months, including content moderation, logo detection, optical character recognition, speech transcription, custom models built through Hive's AutoML platform, and more.

Secure and Accelerated Deployments with NIM

Short for NVIDIA Inference Microservices, NIM provides models as optimized containers, enabling organizations to run AI models on NVIDIA GPUs in private clouds, on workstations, and on-premises. NVIDIA NIM is part of the NVIDIA AI Enterprise software platform, and it connects the power of Hive's proprietary AI models, securely deployed on NVIDIA's accelerated infrastructure, with enterprise customers everywhere.

While Hive's cloud-based APIs process billions of customer requests every month, one of the top requests from prospective customers has been the ability to deploy Hive models in private clouds or on-premises. These are often enterprises whose strict data governance standards preclude the use of our cloud-based APIs. Our integration with NIM solves this challenge.

How Customers Use Our Leading AI Detection Models

Our AI detection tools, the first Hive models to be made available with NVIDIA NIM, have been widely recognized as best-in-class, including by an independent research study from the University of Chicago. The researchers found that Hive's model was the "clear winner" against both its automated competitors and highly trained human experts in classifying images as either AI-generated or human-created.

With generative AI on the rise, Hive's AI detection models have become crucial in combating the technology's misuse. Here are select ways that customers use our models to protect themselves from the potential misuse of AI-generated and synthetic content.

Internet social platforms leverage our AI detection models to proactively screen content for AI-enabled misinformation in real time. Digital platforms can use our detections to provide transparency to their users by tagging content as AI-generated, or to moderate potential misinformation by implementing sitewide bans.

Insurance companies use our models to automate the identification of AI-enabled fraud in evidence submitted with insurance claims. By scanning claims evidence for AI-generated augmentations, insurers can quickly, confidently, and securely weed out fraud, avoiding the significant cost of paying out fraudulent claims.
Banks, brokers, and other financial institutions use our AI-generated content detection models to secure their user identity verification and KYC processes, leveraging Hive's industry-leading AI-generated audio detection model to verify voice recognition workflows and prevent sophisticated financial fraud.

Digital marketplaces use our models to automate the detection and moderation of fraudulent listings. Marketplaces also protect their customers' experience by verifying that both users and their product reviews are authentic.

Video conferencing and live streaming platforms integrate our AI detection models to authenticate video and audio in real time, preventing both impersonation and the misuse of likenesses.

While not all-encompassing, these are select ways that customers use our models today.

Managing the Risks of Generative AI

The increasing accessibility of generative AI tools poses a new set of risks to companies and organizations, and moderating the proliferation of AI-generated content in a scalable, automated, and secure way is difficult. We are proud to provide a solution that supports our customers in managing these risks, now made more accessible for enterprises to deploy on-premises or in private clouds with NVIDIA NIM.

If you're interested in accessing Hive's AI models through NVIDIA NIM, you can learn more on our website here or on NVIDIA's website here. If you have any questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
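For a sense of what an on-premises NIM deployment looks like from the application side, here is a minimal sketch of querying a locally running inference microservice. The port, endpoint path, request schema, and response field are all hypothetical; the actual invocation details come from the NIM documentation for the specific Hive model you deploy.

```python
import requests

# Minimal sketch: once a Hive detection NIM container is running locally
# (e.g., serving HTTP on port 8000), inference stays inside your network,
# so no media leaves the private cloud or data center.
NIM_URL = "http://localhost:8000/v1/infer"  # hypothetical local endpoint

resp = requests.post(
    NIM_URL,
    json={"url": "https://intranet.example.com/uploads/clip.mp4"},  # hypothetical schema
    timeout=60,
)
resp.raise_for_status()

# Hypothetical response field carrying the AI-generated likelihood.
print("ai_generated score:", resp.json().get("ai_generated"))
```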
"Clear Winner": Study Shows Hive's AI-Generated Image Detection API Is Best-in-Class

Hive | September 10, 2024

Navigating an Increasingly Generative World

To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive's commitment to providing customers with API-accessible solutions to challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti from the University of Chicago, which sought to determine who is more effective at classifying images as AI-generated: humans or automated detectors.

Ha and Passananti's study addresses a growing problem in the generative AI space: as generative AI models become more advanced, human-created art and AI-generated images have become increasingly difficult to tell apart. With such powerful tools accessible to the general public, various legal and ethical concerns have been raised regarding the misuse of the technology.

These concerns are pertinent because the misuse of generative AI models harms both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes, such as spreading misinformation, committing fraud, or scamming individuals and organizations. Because only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown that AI models deteriorate if their training data consists solely of AI-generated content, which is where Hive's classifier comes in handy.

The study's results show that Hive's model outperforms both its automated peers and highly trained human experts in differentiating human-created art from AI-generated images across most scenarios. This post examines the study's methodology and findings, and highlights our model's consistent performance across various inputs.

Structuring the Study

In the experiment, the researchers evaluated the performance of five automated detectors (three of them commercially available, including Hive's model) and humans against a dataset containing both human-created and AI-generated images across various art styles. Humans were categorized into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience in identifying AI-generated images.

The dataset consists of four image groups: human-created art, AI-generated images, "hybrid images" that combine generative AI and human effort, and perturbed versions of human-created art. A perturbation is a minor change to a model's input aimed at exposing vulnerabilities in the model's structure. Four perturbation methods are used in the study: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which perturbs at the pixel level), and Glaze (a tool that protects human artists from mimicry by introducing imperceptible perturbations to their artwork).
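To illustrate the kind of perturbations involved, here is a minimal sketch of the two simplest methods, JPEG compression and Gaussian noise, applied to an image before it is resubmitted to a detector. This is our own illustrative code using Pillow and NumPy, not the study's implementation; CLIP-based adversarial perturbation and Glaze involve substantially more machinery.

```python
import io

import numpy as np
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int = 30) -> Image.Image:
    """Round-trip the image through lossy JPEG encoding at the given quality."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

def add_gaussian_noise(img: Image.Image, sigma: float = 10.0) -> Image.Image:
    """Add zero-mean Gaussian noise with standard deviation sigma to each pixel."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

# Usage: perturb a local file ("art.png" is a placeholder path), then send
# both versions to a detector and compare the returned scores.
original = Image.open("art.png")
jpeg_compress(original, quality=30).save("art_jpeg.jpg")
add_gaussian_noise(original, sigma=10.0).save("art_noisy.png")
```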
After evaluating the detectors on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.

Evaluation Methods and Findings

The researchers evaluated the automated detectors on four metrics: overall accuracy (the ratio of images classified correctly to the entire dataset), false positive rate (the ratio of human-created art misclassified as AI-generated), false negative rate (the ratio of AI-generated images misclassified as human-created), and AI detection success rate (the ratio of AI-generated images correctly classified as AI-generated to the total number of AI-generated images).

Among automated detectors, Hive's model emerged as the "clear winner" (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, it also has a 0% false positive rate (it never misclassified human art) and a low 3.17% false negative rate (it rarely misclassified AI-generated images). According to the authors, this could be attributed to Hive's rich collection of generative AI datasets, giving it larger quantities of diverse training data than its competitors.

Hive's model also proved resistant to most perturbation methods, though it faced some challenges classifying AI-generated images processed with Glaze. It should be noted, however, that Glaze's primary purpose is to protect human artwork; glazing AI-generated images is a non-traditional use case with minimal training data available as a result. Thus, our model's performance on Glazed AI-generated images has little bearing on its overall quality.

Final Thoughts Moving Forward

When it comes to automated detectors and humans alike, Hive's model is unparalleled. Even compared to expert human artists, it classifies images with higher confidence and accuracy. And while the study notes areas where the model could improve, it was published in February 2024; in the months since, Hive's model has vastly improved and continues to expand its capabilities, with 12+ model architectures added since publication.

If you'd like to learn more about Hive's AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. But don't just trust us, test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.
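For concreteness, here is a small sketch of how the four metrics defined in the Evaluation Methods section above are computed from a labeled evaluation set. The confusion counts are hypothetical; only the formulas follow the study's definitions, with AI-generated treated as the positive class.

```python
# Hypothetical confusion counts for a binary detector, where the positive
# class is "AI-generated" and the negative class is "human-created".
true_positives = 305   # AI-generated, classified as AI-generated
false_negatives = 10   # AI-generated, misclassified as human-created
true_negatives = 300   # human-created, classified as human-created
false_positives = 0    # human-created, misclassified as AI-generated

total = true_positives + false_negatives + true_negatives + false_positives

overall_accuracy = (true_positives + true_negatives) / total
false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)
ai_detection_success_rate = true_positives / (true_positives + false_negatives)

print(f"accuracy={overall_accuracy:.4f}, FPR={false_positive_rate:.4f}, "
      f"FNR={false_negative_rate:.4f}, detection={ai_detection_success_rate:.4f}")
```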