Protecting Children’s Online Safety with Internet Watch Foundation

Hive is proud to announce that we are partnering with Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online. We will be integrating their proprietary keyword and URL lists into our default Text Moderation model for all customers at no additional cost.

Our Joint Commitment to Child Safety

Making the internet a safer place is one of Hive’s core values. Our partnership with IWF allows us to use their specialized knowledge to bolster our leading content moderation tools, helping our customers better detect and flag online records of child sexual abuse. 

As part of our partnership, Hive will now include the following two IWF wordlists as part of our default Text Moderation model for all customers at no additional cost:

  1. Keyword List: This wordlist contains known terms and code words that offenders use to exchange child sexual abuse material (CSAM) discreetly. More information can be found here.
  2. URL List: This list comprehensively catalogs webpages confirmed to host CSAM in image or video form. More information can be found here.

With these lists, customers can now use Text Moderation to catch keywords and URLs associated with CSAM. Both lists are dynamic and are updated daily.

We recommend that all customers perform an initial evaluation to determine whether the lists’ keywords are helpful for their specific use case; for more information, refer to the following documentation. A sample Text Moderation response can be found below.
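
The trimmed sketch below is illustrative only: the field names and filter types are assumptions rather than the exact schema, which is defined in the documentation referenced above. It shows the general shape of a response in which IWF list matches are surfaced alongside the usual class scores.

    {
      "status": [{
        "response": {
          "text_filters": [
            { "type": "csam_keyword", "value": "<matched IWF keyword>" },
            { "type": "csam_url", "value": "<matched IWF URL>" }
          ],
          "output": [{ "classes": [{ "class": "sexual", "score": 3 }] }]
        }
      }]
    }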

Integration with Thorn Safer Match

Our partnership also grants us access to IWF’s hash lists. Previously, we partnered with Thorn, allowing customers to integrate their Safer Match hash matching technology for CSAM detection using Hive APIs.

We can now match against IWF’s hash lists through Thorn Safer Match. If you would like this feature enabled, please reach out to our sales team (sales@thehive.ai).

By combining our leading moderation tools with IWF’s specialized expertise, we hope that we can create a safer internet for children worldwide.

For more details, you can find our recent press release here, as well as our CEO Kevin Guo’s interview with Rashi Shrivastava of Forbes here. If you’re interested in learning more about what we do, please reach out to our sales team or contact us here for further questions.

Expanding our Moderation APIs with Hive’s New Vision Language Model

Hive is thrilled to announce that we’re releasing our Moderation 11B Vision Language Model. Fine-tuned on top of Llama 3.2 11B Vision Instruct, Moderation 11B is a new vision language model (VLM) that expands our established suite of text and visual moderation models. Building on our existing capabilities, this new model offers a powerful way to handle flexible and context-dependent moderation scenarios.

An Introduction to VLMs and Moderation 11B

Vision language models (VLMs) are models that learn from both image and text inputs. The ability to process inputs across multiple modalities (e.g., images and text) simultaneously is known as multimodality. While VLMs share many capabilities with large language models (LLMs), traditional LLMs cannot process image inputs.

With Moderation 11B VLM, we leverage these multimodal capabilities to extend our existing suite of moderation tools. Beyond multimodality, Moderation 11B VLM can incorporate additional contextual information, which is not possible with our traditional classifiers. The model’s baked-in knowledge, combined with training on our classifier dataset, enables a more comprehensive approach to moderation.

Moderation 11B VLM is trained on all 53 public heads of our Visual Moderation system and recognizes content across distinct categories such as sexual content, violence, drugs, and hate. These enhancements make it a valuable addition to our existing Enterprise moderation classifiers, helping capture the wide range of flexible and alternative cases that arise in dynamic workflows.

Potential Use Cases

Moderation 11B VLM applies to a broad range of use cases, notably surpassing Llama 3.2 11B Vision Instruct in identifying contextual violations and handling unseen data in our internal tests. Below are some potential use cases where our model performs well:

  1. Contextual violations: Cases where no individual input is a violation on its own, but all inputs taken together constitute one. For example, a text message could appear harmless in isolation, yet the preceding conversation context reveals it to be a violation.
  2. Multi-modal violations: Situations where both text and image inputs are important. For instance, analyzing a product image alongside its description can uncover violations that single-modality models would miss.
  3. Unseen data: Inputs that the model has not previously encountered. For example, customers may use Moderation 11B VLM to ensure that user content aligns with newly introduced company policies.

Below are graphical representations of how our fine-tuned Moderation 11B model performed in our internal testing compared to the Llama 3.2 11B Vision Instruct model. We assessed their respective F1 scores, a metric that combines both precision and recall. The F1 score was computed using the standard formula: F1 = 2 * (precision * recall) / (precision + recall).
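
For example, a model with precision 0.90 and recall 0.80 would score F1 = 2 * (0.90 * 0.80) / (0.90 + 0.80) ≈ 0.85.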

In our internal evaluation, we tasked both Moderation 11B VLM and Llama 3.2 11B Vision Instruct with learning the classification guidelines outlined in our public Visual Moderation documentation. We then used these guidelines to evaluate a sizable, randomly selected sample of images from our proprietary Visual Moderation dataset, which has highly accurate, hand-labeled ground-truth classifications. The sample included diverse and challenging content types from each of our visual moderation heads, such as sexual intent, hate symbols, and self-harm. While Moderation 11B VLM’s performance demonstrates that it generalizes well within the scope of these content classes, results may vary depending on the composition of external datasets.

Expanding Moderation

With Moderation 11B VLM’s release, we hope to meaningfully and flexibly broaden the range of use cases our moderation tools can handle. We’re excited to see how this model assists with your moderation workflows, especially when navigating complex scenarios. Anyone with a Hive account can access our API playground here to try Moderation 11B VLM directly from the user interface.

Below are two examples of Moderation 11B VLM requests and responses.
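
Both sketches are illustrative only: the endpoint path and field names are assumptions rather than the actual request schema, which is described in the documentation linked underneath.

Example 1, a multi-modal check that pairs a product image with its listing text:

    # Hypothetical request:
    curl https://api.thehive.ai/api/v3/moderation-11b-vlm \
      -H "Authorization: Bearer <YOUR_API_KEY>" \
      -H "Content-Type: application/json" \
      -d '{
        "messages": [{
          "role": "user",
          "content": [
            { "type": "text", "text": "Policy: no weapon sales. Does this listing violate the policy? Listing text: Vintage collectible, local pickup only." },
            { "type": "image_url", "image_url": { "url": "https://example.com/listing.jpg" } }
          ]
        }]
      }'

    # Hypothetical response:
    { "choices": [{ "message": { "role": "assistant", "content": "Violation. The listing text reads as a harmless collectible, but the image shows a functional firearm offered for sale." } }] }

Example 2, a contextual check in which the final message is a violation only in light of the preceding conversation:

    # Hypothetical request:
    curl https://api.thehive.ai/api/v3/moderation-11b-vlm \
      -H "Authorization: Bearer <YOUR_API_KEY>" \
      -H "Content-Type: application/json" \
      -d '{
        "messages": [{
          "role": "user",
          "content": [{ "type": "text", "text": "Conversation: (A) I know where you live. (B) Stop messaging me. (A) See you tonight. Is the last message a violation?" }]
        }]
      }'

    # Hypothetical response:
    { "choices": [{ "message": { "role": "assistant", "content": "Violation. On its own the last message is benign, but in context it continues a pattern of threats and harassment." } }] }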

For more details, please refer to the documentation here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.

Announcing Hive’s Partnership with the Defense Innovation Unit

Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.

Combating Synthetic Media and Disinformation

Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary to protect the integrity of vital information systems and to stay vigilant against the spread of misinformation, threats, and conflict at a national scale.

Hive’s reputation as a frontline defender against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand; we have been, and remain, committed to delivering unmatched detection tools that mitigate these risks with accuracy and speed.

Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.

Our Cutting-Edge Tools

To best empower the U.S. defense forces against potential threats, we have provided five proprietary models that can detect whether an input is AI-generated or a deepfake.

An input flagged as AI-generated was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the original faces has been swapped with another person’s face.

The models we’ve provided are as follows:

  1. AI-Generated Detection (Image and Video), which detects if an image or video is AI-generated.
  2. AI-Generated Detection (Audio), which detects if an audio clip is AI-generated.
  3. Deepfake Detection (Image), which detects if an image contains one or more faces that are deepfaked.
  4. Deepfake Detection (Video), which detects if a video contains one or more faces that are deepfaked.
  5. Liveness (Image and Video), which detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside of the primary image).
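
Because these models are deployed in an offline, on-premise environment, the exact interface depends on the deployment. Purely as a hypothetical illustration (the labels and score ranges below are assumptions, not the deployed schema), results for a single image run through the image-based detectors might look like:

    ai_generated: 0.03    (low: the image itself is likely not synthetic)
    deepfake:     0.95    (high: one or more faces appear to be swapped)
    liveness:     primary (the face exists in the primary image, not in a screen or painting within it)

Read together, such scores would suggest a real photograph whose face has been manipulated, rather than a fully AI-generated image.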

Forging a Path Forward

Even as new threats continue to emerge and escalate, Hive remains steadfast in our commitment to providing the world’s most capable AI models for validating the safety and authenticity of digital content.

For more details, you can find our recent press release here and the DIU’s press release here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.

Model Explainability With Text Moderation

Hive is excited to announce that we are releasing a new API: Text Moderation Explanations! This API helps customers understand why our Text Moderation model assigns text strings particular scores.

The Need For Explainability

Hive’s Text Moderation API scans a text string or message, interprets it, and returns a score from 0 to 3, mapping to a severity level, across a number of top-level classes and dozens of languages. Today, hundreds of customers send billions of text strings each month through this API to protect their online communities.

A top feature request has been explanations for why our model assigns the scores it does, especially for foreign languages. While some moderation scores may be self-evident, edge cases can be ambiguous as to why a string was scored the way it was.

This is where our new Text Moderation Explanations API comes in—delivering additional context and visibility into moderation results in a scalable way. With Text Moderation Explanations, human moderators can quickly interpret results and utilize the additional information to take appropriate action.

A Supplement to Our Text Moderation Model

Our Text Moderation classes are ordered by severity, ranging from level 3 (most severe) to level 0 (benign). These levels correspond to the possible scores Text Moderation can give a text string. For example, if a text string falls under the “sexual” head and contains sexually explicit language, it would be given a score of 3.

The Text Moderation Explanations API takes in three inputs: a text string, its class label (either “sexual”, “bullying”, “hate”, or “violence”), and the score it was assigned (either 3, 2, 1, or 0). The output is a text string that explains why the original input text was given that score relative to its class. It should be noted that Explanations is only supported for select multilevel heads (corresponding to the class labels listed previously).
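
As a sketch of the request shape (the endpoint and parameter names here are assumptions, not the exact format, which is covered in the documentation mentioned below), a call might pass the three inputs like this:

    # Hypothetical request:
    curl https://api.thehive.ai/api/v2/text-moderation-explanations \
      -H "Authorization: Token <YOUR_API_KEY>" \
      -d 'text_data=<flagged text string>' \
      -d 'class_label=violence' \
      -d 'score=2'

    # Hypothetical response:
    { "explanation": "The message describes a threat of physical harm against a specific person, which corresponds to severity level 2 under the violence class." }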

To develop the Explanations model, we used supervised fine-tuning: we fine-tuned the base model on data labeled internally at Hive by native speakers. This approach allows us to support a number of languages apart from English.

Comprehensive Language Support

We have built our Text Moderation Explanation API with broad initial language support, addressing the crucial problem of understanding why a text string in one’s non-native language was scored a certain way.

We currently support eight languages and four top-level classes (sexual, bullying, hate, and violence) for Text Moderation Explanations.

Text Moderation Explanations are now included at no additional cost as part of our Moderation Dashboard product.

Additionally, customers can access the Text Moderation Explanations model through an API (refer to the documentation).

In future releases, we anticipate adding further language and top level class support. If you’re interested in learning more or gaining test access to the Text Moderation Explanations model, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.

Expanding Our CSAM Detection API

We are excited to announce that Hive is now offering Thorn’s predictive technology through our CSAM detection API! The API enables customers to identify novel cases of child sexual abuse material (CSAM) in addition to detecting known CSAM using hash-based matching.

Our Commitment to Child Internet Safety

At Hive, making the internet safer is core to our mission. While our content moderation tools help reduce human exposure to harmful content across many categories, addressing CSAM requires specialized expertise and technology.

That’s why we’re expanding our existing partnership with Thorn, an innovative nonprofit that builds technology to defend children from sexual abuse and exploitation in the digital age.

Until now, our integration with Thorn focused on hash-matching technology to detect known CSAM. The new CSAM detection API builds on this foundation by adding advanced machine learning capabilities that can identify previously unidentified CSAM.

By combining Thorn’s industry-leading CSAM detection technology with Hive’s comprehensive content moderation suite, we provide platforms with robust protection against both known and newly created CSAM.

How the Classifier Works

The classifier works by first generating embeddings of the uploaded media. An embedding is a list of computer-generated scores between 0 and 1. After generating the embeddings, Hive permanently deletes all of the original media. We then use the classifier to determine whether the content is CSAM based on the embeddings. This process ensures that we do not retain any CSAM on our servers. 

The classifier returns a score between 0 and 1 that predicts whether a video or image is CSAM. The response object will have the same general structure for both image and video inputs. Please note that Hive will return both results together: probability scores from the classifier and any match results from hash matching against the aggregated hash database.
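
As a rough, hypothetical sketch (the field names here are illustrative; the real schema is in the documentation below), such a combined response might be shaped like:

    {
      "output": {
        "classifier": { "csam_probability": 0.87 },
        "hash_matches": [{ "database": "aggregated hash database", "match": true }]
      }
    }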

For a detailed guide on how to use Hive’s CSAM detection API, refer to the documentation.

Building a Safer Internet

Protecting platforms from CSAM demands scalable solutions. The problem is complex, but our integration with Thorn’s advanced technology provides an efficient way to detect and stop CSAM, helping to safeguard children and build a safer internet for all.

If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.

Announcing General Availability of Hive Models

We are excited to announce that we are making select proprietary Hive models and popular open-source generative models directly accessible for customers to deploy and integrate into their workflows.

Starting today, customers can now create projects by themselves with just a few clicks.

Hive Proprietary Models

We have made select proprietary Hive models accessible to customers across our Understand and Search model categories, ranging from our Celebrity Recognition API to our Speech-to-Text model. For a full list of generally available models, see our pricing page here.

Additional Model Offerings

We currently offer a variety of open-source image generation models and large language models (LLMs) that customers can directly access themselves.

For image generation models, we have four different options available today, with additional models being served in the coming weeks: SDXL (Stable Diffusion XL), SDXL Enhanced, Flux Schnell, and Flux Schnell Enhanced. SDXL Enhanced and Flux Schnell Enhanced are Hive’s enhanced versions of the aforementioned base models, served exclusively to our customers. The differences are outlined in the table below.

SDXL (Stable Diffusion XL): A latent diffusion text-to-image generation model produced by Stability AI. Trained on a larger dataset than the base model, with a larger UNet enabling better generation.

SDXL Enhanced: Hive’s enhanced version of SDXL, served exclusively to our customers. Tailored toward a photorealistic and refined art style with extreme detail.

Flux Schnell: The fastest model in Flux’s suite of text-to-image models, capable of generating images in 4 or fewer steps. Best suited for local development and personal use.

Flux Schnell Enhanced: Hive’s enhanced version of Flux Schnell, trained on our proprietary data while retaining the base model’s speed and efficiency, served exclusively to our customers. Generates images across a wide range of artistic styles, with a specialization in photorealism that has led to high levels of customer satisfaction in past user studies.

For LLMs, we have a selection of Meta’s Llama models from their Llama 3.1 and 3.2 series available now. The differences are outlined in the table below.



Llama 3.1 8B Instruct: A multilingual, instruction-tuned, text-only model. Compared to other available open-source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (70B).

Llama 3.1 70B Instruct: A multilingual, instruction-tuned, text-only model. Compared to other available open-source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (8B).

Llama 3.2 1B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits on both edge and mobile devices. It excels at use cases such as summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (3B).

Llama 3.2 3B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits on both edge and mobile devices. It excels at use cases such as summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (1B).

We plan to make more models available for direct use in the coming months.

How to Create a Project

Creating new projects has never been easier. To get started, go to thehive.ai and click on the “Go to Dashboard” button in the top-right corner.

Home Page

If you are not logged in, the “Go to Dashboard” button will redirect you to the sign-in page. Then, either sign in to an existing account or click the blue “Sign up” hyperlink at the bottom of the page to create a new account.

Sign In Page

You will receive an email to verify your account after signing up. After you’ve either logged into an existing account or verified your new account, you will be redirected to the main dashboard.

For new accounts, a new organization named “(User Name)’s personal organization” will be automatically created. Your current organization will be visible in the top-right corner. Before you can submit tasks, you will need to accept the Terms of Use and add credits to your account. To accept the Terms of Use, click the “View Terms and Conditions” button at the bottom of the page. You will need to do this for every additional organization you create.

Main Dashboard

To add funds to your credit balance, locate the “Billing” section in the bottom-left corner of the dashboard and click the blue “Add Credit” button, which will redirect you to another page where you can add a payment method.

Billing
Add Payment Method

Now you’re ready to create your own projects. On any page, click on the “Products” tab on the left side of the header. From the dropdown menu that appears, select “Models.” It will redirect you to the following page, where you can view all of your current projects.

To create a new project, click on the plus (+) sign next to “Projects” on the top-left side of the screen. You will be redirected to the following page, where you can choose your project type. Select “Hive Models.”

Project Types

Then, you will be redirected to another page containing our available models. Click to select the desired model for your project.

Project Format

After selecting your desired model, you will need to configure your project. Change your project’s name using the text box below. Once you hit the blue “Create” button, your project will be live.

Project Configure

Upon project creation, you will be redirected to the following interface. Here, you can view your API key by clicking the “API Keys” button on the top right.

Project Interface

Using this API key, you can call the API by making a cURL request in your terminal. To interpret the results, please refer to our documentation and look up the relevant model and its class definitions.

Sample cURL Request and Result
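
As a minimal sketch (the endpoint, headers, and response fields shown here are assumptions that vary by model; copy the exact request format from the documentation), a request and trimmed result might look like:

    # Hypothetical request for an image-based model project:
    curl https://api.thehive.ai/api/v2/task/sync \
      -H "Authorization: Token <YOUR_PROJECT_API_KEY>" \
      -F "url=https://example.com/image-to-classify.jpg"

    # Trimmed, hypothetical response:
    {
      "status": [{
        "response": {
          "output": [{ "classes": [{ "class": "<class_name>", "score": 0.99 }] }]
        }
      }]
    }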

For pricing details, please reference our model pricing table here. If you run into any issues building your projects, please feel free to reach out to us at support@thehive.ai and we will be happy to help. If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.