BACK TO ALL BLOGS

Announcing Hive’s Partnership with the Defense Innovation Unit

Contents

Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.

Combating Synthetic Media and Disinformation

Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary in order to protect the integrity of vital information systems and stay vigilant against the future spread of misinformation, threats, and conflicts at a national scale.

Hive’s reputation as frontline defenders against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand, we have been and continue to be committed to delivering unmatched detection tools that can mitigate these risks with accuracy and speed.

Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.

Our Cutting-Edge Tools

To best empower the U.S. defense forces against potential threats, we have provided five proprietary models that can detect whether an input is AI-generated or a deepfake.

If an input is flagged as AI-generated, it was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the original faces has been swapped with another person’s face.

The models we’ve provided are as follows:

  1. AI-Generated Detection (Image and Video), which detects if an image or video is AI-generated.
  2. AI-Generated Detection (Audio), which detects if an audio clip is AI-generated.
  3. Deepfake Detection (Image), which detects if an image contains one or more faces that are deepfaked.
  4. Deepfake Detection (Video), which detects if a video contains one or more faces that are deepfaked.
  5. Liveness (Image and Video), which detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside of the primary image).

Forging a Path Forward

Even as new threats emerge and escalate, Hive remains steadfast in our commitment to providing the world’s most capable AI models for validating the safety and authenticity of digital content.

For more details, you can find our recent press release here and the DIU’s press release here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.


Model Explainability With Text Moderation


Hive is excited to announce that we are releasing a new API: Text Moderation Explanations! This API helps customers understand why our Text Moderation model assigns text strings particular scores.

The Need For Explainability

Hive’s Text Moderation API scans a text string or message, interprets it, and returns a score from 0 (benign) to 3 (most severe) for each of a number of top-level classes, across dozens of languages. Today, hundreds of customers send billions of text strings each month through this API to protect their online communities.
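As a quick illustration of how these severity scores might be consumed downstream, the sketch below filters a hypothetical response for classes at or above a severity threshold. The response shape and field names here are assumptions for illustration, not Hive’s documented schema:

```python
# Hypothetical Text Moderation response; field names are illustrative only,
# not Hive's documented schema.
response = {
    "output": [
        {"classes": [
            {"class": "sexual", "score": 3},
            {"class": "hate", "score": 0},
        ]}
    ]
}

# Severity levels as described above: 0 (benign) through 3 (most severe).
SEVERITY = {0: "benign", 1: "mild", 2: "severe", 3: "most severe"}

def flag_classes(resp, threshold=2):
    """Collect (class, severity) pairs whose score meets the threshold."""
    flagged = []
    for item in resp["output"]:
        for c in item["classes"]:
            if c["score"] >= threshold:
                flagged.append((c["class"], SEVERITY[c["score"]]))
    return flagged

print(flag_classes(response))  # [('sexual', 'most severe')]
```

A moderation pipeline could route anything returned by `flag_classes` to human review while auto-approving the rest.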

A top feature request has been explanations of why our model assigns the scores it does, especially for foreign-language text. While some moderation scores are self-explanatory, edge cases can leave ambiguity as to why a string was scored the way it was.

This is where our new Text Moderation Explanations API comes in—delivering additional context and visibility into moderation results in a scalable way. With Text Moderation Explanations, human moderators can quickly interpret results and utilize the additional information to take appropriate action.

A Supplement to Our Text Moderation Model

Our Text Moderation classes are ordered by severity, ranging from level 3 (most severe) to level 0 (benign). These classes correspond to the possible scores Text Moderation can give a text string. For example, if a text string falls under the “sexual” head and contains sexually explicit language, it would be given a score of 3.

The Text Moderation Explanations API takes in three inputs: a text string, its class label (either “sexual”, “bullying”, “hate”, or “violence”), and the score it was assigned (either 3, 2, 1, or 0). The output is a text string that explains why the original input text was given that score relative to its class. It should be noted that Explanations is only supported for select multilevel heads (corresponding to the class labels listed previously).
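Given that the API accepts exactly these three inputs, a client might validate them before submission. The helper below is a hypothetical sketch, not Hive’s official client, and the payload key names are assumptions:

```python
# Supported class labels and scores, per the description above.
VALID_CLASSES = {"sexual", "bullying", "hate", "violence"}
VALID_SCORES = {0, 1, 2, 3}

def build_explanation_request(text, class_label, score):
    """Validate and assemble the three inputs the Explanations API expects.
    Payload key names are illustrative, not the documented schema."""
    if class_label not in VALID_CLASSES:
        raise ValueError(f"unsupported class: {class_label!r}")
    if score not in VALID_SCORES:
        raise ValueError(f"score must be 0-3, got {score!r}")
    return {"text": text, "class": class_label, "score": score}

print(build_explanation_request("example text", "hate", 2))
```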

To develop the Explanations model, we used supervised fine-tuning: we fine-tuned the base model on labeled data that we produced internally at Hive using native speakers. This approach allows us to support a number of languages beyond English.

Comprehensive Language Support

We have built our Text Moderation Explanations API with broad initial language support. This addresses a crucial pain point: understanding why a text string in a language you don’t speak was scored a certain way.

We currently support eight languages and four top-level classes for Text Moderation Explanations.

Text Moderation Explanations are now included at no additional cost as part of our Moderation Dashboard product.

Customers can also access the Text Moderation Explanations model through an API (refer to the documentation).

In future releases, we anticipate adding further language and top level class support. If you’re interested in learning more or gaining test access to the Text Moderation Explanations model, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.


Expanding Our CSAM Detection API


We are excited to announce that Hive is now offering Thorn’s predictive technology through our CSAM detection API! This API now enables customers to identify novel cases of child sexual abuse material (CSAM) in addition to detecting known CSAM using hash-based matching.

Our Commitment to Child Internet Safety

At Hive, making the internet safer is core to our mission. While our content moderation tools help reduce human exposure to harmful content across many categories, addressing CSAM requires specialized expertise and technology.

That’s why we’re expanding our existing partnership with Thorn, an innovative nonprofit that builds technology to defend children from sexual abuse and exploitation in the digital age.

Until now, our integration with Thorn focused on hash-matching technology to detect known CSAM. The new CSAM detection API builds on this foundation by adding advanced machine learning capabilities that can identify previously unidentified CSAM.

By combining Thorn’s industry-leading CSAM detection technology with Hive’s comprehensive content moderation suite, we provide platforms with robust protection against both known and newly created CSAM.

How the Classifier Works

The classifier works by first generating embeddings of the uploaded media. An embedding is a list of computer-generated scores between 0 and 1. After generating the embeddings, Hive permanently deletes all of the original media. We then use the classifier to determine whether the content is CSAM based on the embeddings. This process ensures that we do not retain any CSAM on our servers. 

The classifier returns a score between 0 and 1 that predicts whether a video or image is CSAM. The response object will have the same general structure for both image and video inputs. Please note that Hive will return both results together: probability scores from the classifier and any match results from hash matching against the aggregated hash database.
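Since the classifier probability and hash-match results arrive together, a moderation pipeline might combine them as sketched below. The response shape is a hypothetical illustration, not Hive’s documented schema:

```python
# Hypothetical combined response: classifier probability plus hash-match
# results. Field names are assumptions for illustration.
sample_response = {
    "classifier": {"csam_probability": 0.02},
    "hash_matches": {"matched": False, "matches": []},
}

def should_escalate(resp, threshold=0.8):
    """Escalate if the hashes matched known CSAM, or if the classifier
    score exceeds the chosen review threshold."""
    prob = resp["classifier"]["csam_probability"]
    matched = resp["hash_matches"]["matched"]
    return matched or prob >= threshold

print(should_escalate(sample_response))  # False
```

The threshold here is a tunable assumption; platforms would calibrate it against their own review capacity and risk tolerance.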

For a detailed guide on how to use Hive’s CSAM detection API, refer to the documentation.

Building a Safer Internet

Protecting platforms from CSAM demands scalable solutions. The problem is complex; but our integration with Thorn’s advanced technology provides an efficient way to detect and stop CSAM, helping to safeguard children and build a safer internet for all.

If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.


“Clear Winner”: Study Shows Hive’s AI-Generated Image Detection API is Best-in-Class


Navigating an Increasingly Generative World

To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive’s commitment to providing customers with API-accessible solutions for challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti from the University of Chicago, which sought to determine who was more effective at classifying images as AI-generated: humans or automated detectors.

Ha and Passananti’s study addresses a growing problem within the generative AI space: As generative AI models become more advanced, the boundary between human-created art and AI-generated images has become increasingly indistinguishable. With such powerful tools being accessible to the general public, various legal and ethical concerns have been raised regarding the misuse of said technology.

Such concerns are pertinent to address because the misuse of generative AI models negatively impacts both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes, such as spreading misinformation, committing fraud, or scamming individuals and organizations. As only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown evidence that AI models will deteriorate if their training data solely consists of AI-generated content—which is where Hive’s classifier comes in handy.

The study’s results show that Hive’s model outperforms both its automated peers and highly-trained human experts in differentiating between human-created art versus AI-generated images across most scenarios. This post examines the study’s methodologies and findings, in addition to highlighting our model’s consistent performance across various inputs.

Structuring the Study

In the experiment, researchers evaluated the performance of five automated detectors (three of which are commercially available, including Hive’s model) and humans against a dataset containing both human-created and AI-generated images across various art styles. Humans were categorized into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience in identifying AI-generated images.

The dataset consists of four different image groups: human-created art, AI-generated images, “hybrid images” which combine generative AI and human effort, and perturbed versions of human-created art. A perturbation is defined as a minor change to the model input aimed at detecting vulnerabilities in the model’s structure. Four perturbation methods are used in the study: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which performs perturbations at the pixel level), and Glaze (a tool used to protect human artists from mimicry by introducing imperceptible perturbations on the artwork).

After evaluating the model on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.

Evaluation Methods and Findings

The researchers evaluated the automated detectors on four metrics: overall accuracy (the proportion of all images classified correctly), false positive rate (the proportion of human-created art misclassified as AI-generated), false negative rate (the proportion of AI-generated images misclassified as human-created), and AI detection success rate (the proportion of AI-generated images correctly classified as AI-generated).
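For concreteness, these four metrics can be computed directly from a detector’s confusion counts, treating AI-generated images as the positive class. This is a generic sketch, not code from the study:

```python
def detector_metrics(tp, fp, tn, fn):
    """Metrics as defined above, with AI-generated as the positive class:
    tp = AI images called AI, fp = human art called AI,
    tn = human art called human, fn = AI images called human."""
    total = tp + fp + tn + fn
    return {
        "overall_accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
        "ai_detection_success_rate": tp / (tp + fn),
    }

# Toy example: 9 of 10 AI images caught, no human art misclassified.
print(detector_metrics(tp=9, fp=0, tn=10, fn=1))
```

Note that the false negative rate and the AI detection success rate are complements of each other; the study reports both for readability.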

Among automated detectors, Hive’s model emerged as the “clear winner” (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, but it also has a 0% false positive rate (i.e., it never misclassifies human art) and a low 3.17% false negative rate (i.e., it rarely misclassifies AI-generated images). According to the authors, this could be attributed to Hive’s rich collection of generative AI datasets, with high quantities of diverse training data compared to its competitors.

Additionally, Hive’s model proved to be resistant against most perturbation methods, but faced some challenges classifying AI-generated images processed with Glaze. However, it should be noted that Glaze’s primary purpose is as a protection tool for human artwork. Glazing AI-generated images is a non-traditional use case with minimal training data available as a result. Thus, Hive’s model’s performance with Glazed AI-generated images has little bearing on its overall quality.

Final Thoughts Moving Forward

When it comes to automated detectors and humans alike, Hive’s model is unparalleled. Even compared to human expert artists, Hive’s model classifies images with higher levels of confidence and accuracy.

While the study considers the model’s potential areas for improvement, it is important to note that the study was published in February 2024. In the months following the study’s publication, Hive’s model has vastly improved and continues to expand its capabilities, with 12+ model architectures added since.

If you’d like to learn more about Hive’s AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. However, don’t just trust us, test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.


Matching Against CSAM: Hive’s Innovative Integration with Thorn’s Safer Match



We are excited to announce that Hive’s partnership with Thorn is now live! Our current and prospective customers can now easily integrate Thorn’s Safer Match, a CSAM (child sexual abuse material) detection solution, using Hive’s APIs.

The Danger of CSAM

The threat of CSAM involves the production, distribution, and possession of explicit images and videos depicting minors. Every platform with an upload button or messaging capabilities is at risk of hosting child sexual abuse material (CSAM). In fact, in 2023 alone, over 104 million reports of potential CSAM were made to the National Center for Missing & Exploited Children.

The current state-of-the-art approach is to use a hashing function to “hash” the content and then “match” it against a database aggregating 57+ million verified CSAM hashes. If the content hash matches against the database, the content can be flagged as CSAM.

How the Integration Works 

When presented with visual content, we first hash it, then match it against known instances of CSAM.

  1. Hashing: We take the submitted image or video and convert it into one or more hashes.
  2. Deletion: We then immediately delete the submitted content, ensuring nothing stays on Hive’s servers.
  3. Matching: We match the hashes against the CSAM database and return whether or not they match.
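The three steps above can be sketched in a few lines. This is a simplified illustration using only an MD5 digest and a placeholder hash set; the real integration also uses Safer’s perceptual hashes, and the known-hash database is never exposed as a plain set:

```python
import hashlib

# Placeholder set for illustration only (this value is just the MD5 of an
# empty byte string, not a real database entry).
KNOWN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def check_image(image_bytes, known_hashes=KNOWN_HASHES):
    digest = hashlib.md5(image_bytes).hexdigest()  # 1. Hashing
    del image_bytes                                # 2. Deletion: nothing retained
    return digest in known_hashes                  # 3. Matching

print(check_image(b""))       # True: matches the placeholder hash
print(check_image(b"hello"))  # False
```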

Hive’s partnership with Thorn allows our customers to easily incorporate Thorn’s Safer Match into their detection toolset. Safer Match provides programmatic identification of known CSAM, using cryptographic and perceptual hash matching for images and proprietary scene-sensitive video hashing (SSVH) for videos.

How you can use this API today:

First, talk to your Hive sales rep, and get an API key and credentials for your new endpoint.

Image

For an image, simply send the image to us, and we will hash it using the MD5 and Safer hashing algorithms. Once the image is hashed, we return the results in our output JSON.

Video

You can also send videos to the API. We use MD5 hashes and Safer’s proprietary perceptual hashing for videos as well; however, they serve different purposes. MD5 will return exact-match videos and will only indicate whether the whole video is a known CSAM video.

Additionally, Safer will hash individual scenes within the video and flag those that are known to be violating. Safer scenes are demarcated by a start and end timestamp, as shown in the response below.

Note: For the Safer SSVH, videos are sampled at 1 FPS.
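A scene-level result might be consumed like this; the field names below are illustrative assumptions, not the documented response schema:

```python
# Hypothetical SSVH-style video result with per-scene timestamps.
video_result = {
    "md5_match": False,
    "scenes": [
        {"start": 12.0, "end": 18.0, "match": True},
        {"start": 18.0, "end": 45.0, "match": False},
    ],
}

def flagged_scenes(result):
    """Return (start, end) timestamps for scenes flagged as known CSAM."""
    return [(s["start"], s["end"]) for s in result["scenes"] if s["match"]]

print(flagged_scenes(video_result))  # [(12.0, 18.0)]
```

Scene timestamps let moderators jump straight to the violating segment rather than reviewing the whole video.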

How Hive processes media to match against Thorn’s Safer database, and the format of the response

For more information, you can refer to our documentation.

Teaming Up For a Safer Internet

CSAM is one of the most pervasive and harmful issues on the internet today. Legal requirements make this problem even harder to tackle, and previous technical solutions required significant integration efforts. But, together with Thorn’s proactive technology, we can respond to this challenge and help make the internet a safer place for everyone.


Customizing Hive Moderation Models with AutoML

Hive’s AutoML platform lets anyone create best-in-class machine learning solutions for the particular issues they face. Our platform can create classification and large language models for an endless range of use cases. If you need a model that bears no resemblance whatsoever to any pre-trained model we offer, no problem! We’ll help you build one yourself.

Hive AutoML uses the same technology behind our industry-leading ML tools to create yours. This way you get the best of both worlds — Hive’s impeccable model performance and a tool custom-built to address your needs.

Hive AutoML for Content Moderation

Today we’ll be focusing on one particular application of our AutoML platform: customizing our moderation models. These models kickstarted our success as a company and are used by many of the largest online platforms in the world. But the moderation guidelines of many sites differ from each other, and sometimes our base moderation models don’t quite fit them. 

With AutoML, you can create your own version of our moderation models by fine-tuning our pre-existing heads or adding new heads entirely. We will then train a version of our high-performing base model with your added data to create a tool that best suits your platform’s moderation process. 

In this blog post, we’ll walk through both how to add more data to an existing Hive moderation head and how to add a new custom moderation head. We’ll demonstrate the former while building a visual moderation model and the latter on a text moderation model. Audio moderation is not currently supported on AutoML.

Building a Visual Moderation Model

Hive AutoML for Visual Moderation allows you to customize our Visual Moderation base model to fit your specific needs. Using your own data, you can add new model heads or fine-tune any of the existing 45+ subclasses that we provide as part of our Visual Moderation tool. A full list of these classes is available here.

For this walkthrough, we’ll be fine-tuning the tobacco head. Our data will thus include images and labels for this head only. The resulting model will include all Hive visual moderation heads, with the tobacco head re-trained to incorporate this new data.

Uploading Your Dataset

Before you start building your model, you first need to upload any datasets you’ll use to the Dataset section of our AutoML platform. For Visual Moderation model training, we require a CSV file with a column for your image data (as publicly accessible image URLs) and an additional column for each head you wish to train.

For this tutorial, we’re going to train using additional data for the tobacco class. The below CSV includes image URLs and a column of labels for that head.

Dataset formatting, images have either “yes_tobacco” or “no_tobacco” labels
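A dataset file in this shape can be assembled with the standard library. The URLs below are placeholders, and the column names are just for illustration; any names work as long as you map them correctly during upload:

```python
import csv
import io

# Placeholder rows; use your own publicly accessible image URLs.
rows = [
    {"image_url": "https://example.com/img1.jpg", "tobacco": "yes_tobacco"},
    {"image_url": "https://example.com/img2.jpg", "tobacco": "no_tobacco"},
]

# Write the CSV to an in-memory buffer (swap for a real file in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["image_url", "tobacco"])
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue().splitlines()[0])  # image_url,tobacco
```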

After you’ve selected your dataset file, you’ll be asked to confirm the column mapping. Make sure the columns of your dataset have been interpreted correctly and that you have the correct format (image or text) selected for each column.

The column mapping confirmation page lets you double check that the data has been processed correctly.

Once you’ve confirmed your mapping, you can preview and edit your data. This page opens automatically after any dataset upload. You will be able to check whether all images were uploaded successfully, view the images themselves, and change their respective labels if desired. You can also add or delete any data that you wish to before you proceed onto model training.

The dataset preview page for an image-based dataset.

Creating a Dataset Snapshot

When you’re happy with your dataset, you’ll then need to create a snapshot from it. A snapshot is a point-in-time export of a dataset that validates that dataset for training. Once a snapshot is created, its contents cannot be changed. This means that while you can continue to edit your original dataset, your snapshot will not change along with it; if you make any changes, you’ll need to create a new snapshot once you’re finished.

The information you’ll be asked to provide when creating a snapshot.

You can create a snapshot from any live dataset. To do so, simply click the “Create Snapshot” button on that dataset’s detail page. You’ll be prompted to provide some information, most notably which columns to use for image input and data labels. After your snapshot is successfully created, you’re ready to start training!

Creating a New Model

To create a training, you can select the “Create Model” button on the snapshot detail page. You’ll once again be asked to provide several pieces of information, including your model’s name, description, base model, and datasets. Make sure to select “Hive Vision Moderation” under the “Base Model” category as opposed to a general image classification model.

When creating your model, make sure you have the correct model type and base model selected.

You can choose to upload a separate test dataset or split off a random section of your training dataset to use instead. If you choose to upload a separate test dataset, this dataset must contain the same heads and classes as your training dataset. After uploading your dataset, you will also need to create a snapshot of that dataset before you begin model training.

If you choose to split off a section of your training dataset, you will be able to choose the percentage of that dataset that you would like to use for testing as you create your training.

Before you begin your training, you are also able to edit some training preferences such as maximum number of training epochs, model selection rule, model selection label, early stopping, and invalid data criteria. If you’re unsure what any of these options are, there is a little information icon next to each that will explain what is meant by that setting.

The training options you’re offered as you create your model include max epochs, model selection rule, and more.

After uploading your training (and, optionally, test) dataset and selecting your training options, you’re ready to create your model. After you begin training, your model will be ready within 20 minutes. You will automatically be directed to the model’s detail page, where you can watch its progress as it trains.

Playground and Metrics: Evaluating Your Model

When your model has completed its training, the model’s detail page will display a variety of metrics in order to help you analyze your model’s performance. At the top of the page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You can toggle whether these metrics are calculated by head overall or by each class within a head.

The model details page displays performance metrics once the model has completed training.

Below these numbers, you’ll also be able to view an interactive precision/recall (PR) curve. This is the gold-standard metric for a classification model and gives you more insight into how your model balances the inherent tradeoff between high precision and high recall.

You’ll then be shown a confusion matrix, which is an exact breakdown of the true positives, false positives, true negatives, and false negatives of the model’s results. This can highlight particular weak spots of your model and potential areas you may want to address with further training. As shown below, our example model has no false positives but several false negatives — images with tobacco that were classified as “no_tobacco.”

This model’s confusion matrix, which shows that there is an issue with false negatives.

The final section of our metrics page is an area called the “playground.” The playground allows you to test your newly created AutoML model by submitting sample queries and viewing the responses. This feature is another great way to explore the way that your model responds to different kinds of prompts and the areas in which it could improve. You are given 500 free sample queries — beyond that you will be prompted to deploy your model with the cost of each submission charged to your organization’s billing account.

To test our tobacco model, we submitted the following sample image. To its right, you can see the results for each Hive visual moderation class, including tobacco, where it is correctly classified with a perfect confidence score of 1.00.

An example image of a man smoking a cigar and the labels assigned to it by our newly trained moderation model.

Deploying Your Model

To begin using your model, you can create a deployment from it. This will open the project on Hive Data, where you will be able to upload tasks, view tasks, and access your API key as you would with any other Hive Data project. An AutoML project can have multiple active deployments at one time.

Building a Text Moderation Model

Just like for Visual Moderation, our AutoML platform allows you to customize our Text Moderation base model to fit your particular use cases by adding or re-training model categories. The full class definitions for all 13 of our currently offered heads are available here. For this section of the walkthrough, we will be creating a new custom head in order to add capabilities to our model that we don’t currently offer: sentiment analysis.

Sentiment analysis is the task of categorizing the emotional tone of a piece of text, typically into two labels: positive or negative. Occasionally, a sentiment analysis task breaks sentiment down into more specific categories, such as joyful, angry, etc. Adding this capability to our existing Hive Text Moderation model could prove useful for platforms that wish to exclude negative content on sites for children, or to limit certain comment sections or forums where negative commentary is unwanted.

Sentiment analysis is a complex problem, since it is a language-based task. Understanding the meaning and tone of a sentence is not always easy even for humans. To keep it simple, we’ll just be using the two possible classifications of positive and negative.

Uploading Your Dataset

Similarly to creating a Visual Moderation model, you’ll need to upload your data as a CSV file to the “Data” section of our AutoML platform prior to model training. The format of our sentiment analysis dataset is shown below, though the column names do not need to be anything specific in order to be processed correctly.

The text data and labels for our sentiment analysis model, formatted into two columns.
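The same standard-library approach used for image datasets works for text. The example rows below are invented, and the column names match the “text_data” and “sentiment” columns used in this walkthrough:

```python
import csv
import io

# Invented example rows for illustration.
rows = [
    {"text_data": "I love this product!", "sentiment": "positive"},
    {"text_data": "This was a waste of money.", "sentiment": "negative"},
]

# Write the CSV to an in-memory buffer (swap for a real file in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text_data", "sentiment"])
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue().splitlines()[0])  # text_data,sentiment
```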

After uploading your dataset, you’ll be asked to confirm the format of each column as either text, image, or JSON. If you’d like to disregard a column entirely, you can also select “Ignore Column.” After you hit confirm, you can preview and edit your dataset just as you could with your image dataset in the Visual Moderation example. The preview page for text datasets is shown below.

The preview page for a text-based dataset.

Creating a Dataset Snapshot

As described in the Visual Moderation walkthrough, you’ll need to create a snapshot of your dataset in order to validate it prior to model training. When making your snapshot, make sure that you select “Text Classification” as your “Snapshot Type.” This will ensure that your snapshot is sufficient to train a Text Moderation model. You will also need to specify which column contains your text input and which contains the labels for that text input, as shown below for our dataset.

When creating your snapshot, you will be asked to provide some information about the dataset.

In the example above, we’ve selected our “text_data” column as our input and our “sentiment” column as our training labels.

Creating a New Model

After you’ve created your snapshot, you’ll automatically be brought to that snapshot’s detail page. From this page, starting a new model training is easy: just hit the big “Create New Model” button on the top right. You’ll be asked to name your model and provide a few key details about the training, such as which snapshots you’d like to use as your data and how many times a training will cycle through that data.

You’ll be able to configure your training by choosing a model selection rule, maximum number of epochs, and more.

Make sure you’ve selected “Text Classification” as your model type and “Hive Text Moderation” as your base model. Then you’re ready to start your training! Model training takes up to 20 minutes depending on several factors, including the size of your dataset, though most trainings complete in just a few minutes.

Metrics and Model Evaluation

Once your training has completed, you’ll be redirected to the details page for your new moderation model. On this page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You will also be able to view a precision/recall (P/R) curve and confusion matrix in order to further analyze the performance of your model.

The sentiment analysis model performs fairly well upon first training, with most metrics around 86%.

The overall performance of the model is pretty good for a difficult task such as sentiment analysis. While there is room for improvement, this first round of training indicates that with some additional data we could likely bring all metrics above 90%. The confusion matrix indicates that a specific area of weakness is false negatives; a possible solution would be to increase the number of positive examples in the data and observe whether this improves results.

The confusion matrix for our model, which shows a 19% false negative rate.

We do not currently offer the playground feature for text moderation models, though we are working on this and expect it to be released in the coming months.

Deploying Your Model

The process for deploying your model is identical to the way we deployed our Visual Moderation model in the first example. To deploy any model, simply click “Create Deployment” from that model’s details page. Once deployed, you can access your unique API keys and begin to submit tasks to the model like any other Hive model.
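The submission itself is a standard HTTP POST. Here's a minimal sketch; the endpoint URL, header format, and payload field name are placeholders, and your deployment's details page on Hive Data shows the actual values:

```python
import json
import urllib.request

# Placeholder endpoint and key: check your deployment's details page on
# Hive Data for the real URL and token format.
API_URL = "https://api.thehive.ai/api/v2/task/sync"
API_KEY = "YOUR_API_KEY"

def build_request(text: str) -> urllib.request.Request:
    """Construct (but don't send) the POST request for one text input."""
    body = json.dumps({"text_data": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("I loved this product, best purchase all year!")
```

Calling `urllib.request.urlopen(req)` would then send the request and return the model's JSON response.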

Final Thoughts

We hope this in-depth walkthrough was helpful. If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.


How to Train Models with Hive AutoML

What is Hive AutoML?

Hive’s AutoML platform allows you to quickly train, evaluate, and deploy machine learning models for your own custom use cases. The process is simple — just select your desired model type, upload your datasets, and you’re ready to begin training! 

Since we announced the initial release of our AutoML platform, we’ve added support for Large Language Model training. Now you can build everything from classification models to chatbots, all in the same intuitive platform. To illustrate how easy the model-building process is, we’ll walk through it step-by-step with each type of model. We’ll also provide a link to the publicly available dataset we used as an example so that you can follow along.

Training an Image Classification Model

First we’re going to create an Image Classification model. This type of model is used to identify certain subjects, settings, and other visual attributes in both images and videos. For this example, we’ll be using a snacks dataset to identify 20 different kinds of food (strawberries, apples, hot dogs, cupcakes, etc.). To follow along with this walkthrough, first download the images from this dataset, which are sorted into separate files for each label.

Formatting the Datasets

After downloading the image data, we’ll need to put this data in the correct format for our AutoML training. For Image Classification datasets, the platform requires a CSV file that contains one column for image URLs titled “image_url” and up to 20 other columns for the classification categories you wish to use. This requires creating publicly accessible links for each image in the dataset. For this example, all 20 of our food categories will be part of the same head — food type. To do this, we formatted our CSV as follows:

The snacks dataset in the correct format for our AutoML platform
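Building that CSV programmatically is straightforward with Python's csv module. A sketch, with placeholder URLs and a single hypothetical label column named "food_type" alongside the required "image_url" column:

```python
import csv

# Placeholder URLs; "food_type" is our single head, and its values are the
# snack labels from the dataset's folder names.
rows = [
    {"image_url": "https://example.com/snacks/strawberry_001.jpg", "food_type": "strawberry"},
    {"image_url": "https://example.com/snacks/hot_dog_014.jpg", "food_type": "hot dog"},
    {"image_url": "https://example.com/snacks/cupcake_203.jpg", "food_type": "cupcake"},
]

with open("snacks_train.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_url", "food_type"])
    writer.writeheader()
    writer.writerows(rows)
```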

This particular dataset is within the size limitations for Image Classification datasets. When uploading your own dataset, it is crucial that you ensure it meets all of the sizing requirements and other specifications or the dataset upload will fail. These requirements can be found in our AutoML documentation.

Both test and validation datasets are provided as part of the snacks dataset. When using your own datasets, you can choose to upload a test dataset or to split off a random section of your training data to use instead. If you choose the latter, you will be able to select what percentage of that data you want to use as test data as you create your training.

Uploading the Datasets

Before we start building the model, we first need to upload both our training and test datasets to the “Datasets” section of our AutoML platform. This part of our platform validates each dataset before it can be used for training as well as stores all datasets to be easily accessed for future models. We’ll upload both the training and test datasets separately, naming them Snacks (Train) and Snacks (Test) respectively.

Creating a Training

To start building your model, we’ll head to our AutoML platform and select the “Create New Model” button. We’ll then be brought to a project setup page where we will be prompted to enter a project name and description. For Model Type, we’ll select “Image Classification.” On the right side of the screen, we can add our training dataset by selecting from our dataset library. We’ll select the datasets called Snacks (Train) and Snacks (Test) that we just uploaded.

The “Create New Model” page

And just like that, we’re ready to start training our model! To begin the training process, we’ll click the “Start Training Model” button. The model’s status will shift to “Queued” and then “In Progress” while the model trains. This will likely take several minutes. When training is complete, the status will display as “Completed.”

Evaluating the Model

After model training is complete, the page for that project will show various performance metrics so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The model’s project page after training has completed

Below that, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that two images of cupcakes were incorrectly classified as cookies — an understandable mistake as the two are both decorated desserts.

The confusion matrix for our snacks model

These detailed metrics can help us to know what categories to target if we want to train a better version of the model. If you would like to retrain your model, you can also click the “Update Model” button to begin the training process again.

Deploying the Model

Even after just this first training, we’re pretty happy with how the model turned out. We’re ready to deploy the model and start using it. To deploy, select the project and click the “Create Deployment” button in the top right corner. The project’s status will shift to “Deploying.” The deployment may take a few minutes.

Submitting Tasks via API

After the deployment is complete, we’re ready to start submitting tasks via API as we would any pre-trained Hive model. We can click on the name of any individual deployment to open the project on Hive Data, where we can upload tasks, view tasks, and access our API key. There is also a button to “Undeploy” the project, if we wish to deactivate it at any point. Undeploying a model is not permanent — we can redeploy the project if we later choose to.

To see a video of the entire training and deployment process for an Image Classification model, head over to our Youtube channel.

Training a Text Classification Model

We’ll now walk through that same training process in order to build a Text Classification model, but with a few small differences. Text classification models can be used to sort and tag text content by topic, tone, and more. For this example, we’ll use the Twitter Sentiment Analysis dataset posted by user carblacac on Hugging Face. This dataset consists of a series of short text posts originally published to Twitter and whether they have a negative (0) or positive (1) overall sentiment. To follow along with this walkthrough, you can download the dataset here.

Formatting the Datasets

For Text Classification datasets, our AutoML platform requires a CSV with the text data in a column titled “text_data” and up to 20 other columns that each represent classification categories, also called model heads. Using the Twitter Sentiment Analysis dataset, we only need to rename the columns like so:

Our Twitter Sentiment Analysis data formatted correctly for our AutoML platform

The data consists of two sets, a training set with 150k examples and a test set with 62k examples. Before we upload our dataset, however, we must ensure that it fits our Text Classification dataset requirements. In the case of the training set, it does not fit those requirements — our AutoML platform only accepts CSV files that have 100,000 rows or less and this one has 150,000. In order to use this dataset, we’ll have to remove some examples from the set. In order to keep the number of examples for each class relatively equal, we removed 25,000 negative (0) examples and 25,000 positive (1) ones.
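A quick sketch of that balanced downsampling, assuming the rows have already been read from the CSV with labels in a "sentiment" column (the helper name and demo data are ours, for illustration):

```python
import random

def downsample_balanced(rows, label_key, n_remove_per_class, classes=(0, 1), seed=0):
    """Randomly drop n_remove_per_class examples of each class, keep the rest."""
    rng = random.Random(seed)
    to_drop = {c: n_remove_per_class for c in classes}
    shuffled = rows[:]
    rng.shuffle(shuffled)
    kept = []
    for row in shuffled:
        label = row[label_key]
        if to_drop.get(label, 0) > 0:
            to_drop[label] -= 1  # drop this example
        else:
            kept.append(row)
    return kept

# Tiny demo: 5 negative + 5 positive rows, drop 2 of each -> 6 remain.
demo = [{"sentiment": i % 2, "text_data": f"tweet {i}"} for i in range(10)]
trimmed = downsample_balanced(demo, "sentiment", n_remove_per_class=2)
```

For the real dataset, `n_remove_per_class` would be 25,000, bringing 150,000 rows down to the 100,000-row limit.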

Uploading the Datasets

After fixing the size issue, we’re ready to upload our datasets. As is the case with all model types, we must first upload any datasets we are going to use before we create our training.

Creating a Training

After both the training and test datasets have been validated, we’re ready to start building our model. On our AutoML platform, we’ll click the “Create New Model” button and enter a project name and description. For our model type, this time we’ll select “Text Classification.” Finally, we’ll add the training and test datasets that we just uploaded.

We’re then ready to start training! This aspect of the training process is identical to the one shown above for an Image Classification model. Just click the “Start Training Model” button on the bottom right corner of the screen. When training is complete, the status will display as “Completed.”

Evaluating the Model

Just like in our Image Classification example, the project page will show various performance metrics after training is complete so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The project page for our Twitter Sentiment Analysis model after it has completed training

Below the precision, recall, and balanced accuracy, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that while there were a fair number of mistakes for each class, there were more cases in which a positive example was mistaken for a negative than the other way around.

While the results of this training are not as good as our Image Classification example, this is somewhat expected — sentiment analysis is a more complex and difficult classification task. While this model could definitely be improved by retraining with slightly different data, we’ll demonstrate how to deploy it. To retrain your model, however, all you need to do is click the “Update Model” button and begin the training process again.

Deploying the Model

Deploying your model is the exact same process as described above in the Image Classification example. After the deployment is complete, you’ll be able to view the deployment on Hive Data and access the API keys needed in order to begin using the model. 

To see a video of the entire training and deployment process for a Text Classification model, head over to our Youtube channel.

Training a Large Language Model

Finally, we’ll walk through the training process for a Large Language Model (LLM). This process is slightly different from the training process for our classification model types, both in terms of dataset formatting and model evaluation.
Our AutoML platform supports two different types of LLMs: Text and Chat. Text models are geared towards generating passages of writing or lines of code, whereas chat models are built for interactions with the user, often in the format of asking questions and receiving concise, factual answers. For this example, we’ll be using the Viggo dataset uploaded by GEM to Hugging Face. To follow along with us as we build the model, you can download the training and test sets here.

Formatting the Datasets

This dataset supports the task of summarizing and restructuring text into a very specific syntax. All data is within the video game domain, and all prompts take the form of either questions or statements about various games. The goal of the model is to take these prompts, extract the main idea behind them, and reformat them. For example, the prompt “Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good” becomes “give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor]).”

First, we’ll check that this dataset is valid per our guidelines for AutoML datasets. At around 5,000 rows, its size is well under the limit of 50,000. To format it correctly, we just need to put the prompt in a column titled “prompt” and the expected completion in a column titled “completion”; all other columns can be removed. From this dataset, we will use the column “target” as “prompt” and the column “meaning_representation” as “completion.” The final CSV is shown below:

The Viggo dataset ready to upload to our AutoML platform
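The column mapping is a one-liner per row. A sketch using the give_opinion example from above; the "references" key stands in for any extra Viggo column that gets dropped:

```python
def to_prompt_completion(row):
    """Keep only the two columns the platform expects, under its names."""
    return {"prompt": row["target"], "completion": row["meaning_representation"]}

# The give_opinion example from the text; "references" is a stand-in for
# whatever extra columns the raw dataset carries.
row = {
    "target": "Guitar Hero: Smash Hits launched in 2009 but plays like a game "
              "from 1989, it's just not good",
    "meaning_representation": "give_opinion(name[Guitar Hero: Smash Hits], "
                              "release_year[2009], rating[poor])",
    "references": "(dropped)",
}
converted = to_prompt_completion(row)
```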

Uploading the Datasets

Now let’s upload our datasets. We’ll be using both the training and test datasets from the Viggo dataset as provided here. After both datasets have been validated, we’re ready to train the model.

Creating a Training

We’ll head back to our Models page and select “Create New Model”. This time, the project type should be “Language Generative – Text”. We will then choose our training and test datasets from a list of ones that we’ve already uploaded to the platform. Then we’ll start the training!

Evaluating the Model

For Large Language Models, the metrics page looks a little different than it does for our classification models.

The project page for the Viggo model after it has completed training

The loss measures how closely the model’s response matches the reference response from the test data: 0 represents a perfect prediction, and a higher loss signifies that the prediction is increasingly far from the actual response sequence. If the response has 10 tokens, the model predicts each of the 10 tokens given that all previous tokens match the reference, and we display the final numerical loss value.
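The exact formula isn't exposed in the platform, but a standard choice for this kind of per-token loss is the average negative log-likelihood of the reference tokens under teacher forcing. A sketch:

```python
import math

def sequence_loss(token_probs):
    """Average negative log-likelihood over the reference tokens.

    token_probs[i] is the probability the model assigned to the i-th
    reference token, given all earlier reference tokens (teacher forcing).
    A perfect prediction assigns probability 1.0 everywhere, giving loss 0.
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)
```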

You can also evaluate your model by interacting with it in what we call the playground. Here you can submit prompts directly to your model and view its responses, allowing model evaluation through experimentation. The playground is available for 15 days after model training is complete, with a limit of 500 requests. If either the time or request limit is reached, you can instead deploy the model and continue to use the playground without limits; that usage will be charged to your organization’s billing account.

For our Viggo model, all metrics are looking pretty good. We entered a few prompts into the playground to further test it, and the results showed no issues.

An example query and response from the playground feature

Deploying the Model

The process to deploy a Large Language Model is the same as it is for our classification models. Just click “Create Deployment” and you’ll be ready to submit API requests in just a few short minutes.

To see a video of the entire training and deployment process for an LLM, head over to our Youtube channel.

Final Thoughts

We hope this in-depth walkthrough of how to build different types of machine learning models with our AutoML platform was helpful. Keep an eye out for more AutoML tutorials in the coming weeks, such as a detailed guide to Retrieval Augmented Generation (RAG), data stream management systems (DSMS), and other exciting features we support.

If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.

Dataset Sources

All datasets that are linked to as examples in this post are publicly available for a wide range of uses, including commercial use. The snacks dataset and Viggo dataset are both licensed under a Creative Commons Attribution Share-Alike 4.0 (CC BY-SA 4.0) license. They can be found on Hugging Face here and here. The Twitter Sentiment Analysis dataset is licensed under the Apache License, Version 2.0. It is available on Hugging Face here. None of these datasets may be used except in compliance with their respective license agreements.


Best-in-Class: Hive Model Benchmarks

What does it mean to be “best-in-class”?

We often refer to our models as “industry-leading” or “best-in-class,” but what does this actually mean in practice? How are we better than our competitors, and by how much? It is easy to throw these terms around, but we mean it — and we have the evidence to back it up. In this blog post, we’ll be walking through some of the benchmarks that we have run against similar products to show how our models outperform the competition.

Visual Moderation

First, let’s take a look at one of our oldest and most popular models: visual moderation. To compare our model to its major competitors, we ran a test set of NSFW, suggestive, and clean images through all models.

Visual moderation is a classification task — in other words, the model’s job is to classify each submitted image into one of several categories (in this case, NSFW or Clean). A popular and effective metric to measure performance in classification models is by looking at their precision and recall. Precision is the number of true positives (i.e., correctly identified NSFW images) over the number of predicted positives (images predicted to be NSFW). Recall is the number of true positives (correctly identified NSFW images) over the number of ground-truth positives (actual NSFW images). 
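These definitions translate directly into code. A minimal sketch, using "NSFW" as the positive class:

```python
def precision_recall(predictions, labels, positive="NSFW"):
    """Precision and recall for one positive class."""
    true_pos = sum(1 for p, y in zip(predictions, labels) if p == positive and y == positive)
    predicted_pos = sum(1 for p in predictions if p == positive)  # predicted positives
    actual_pos = sum(1 for y in labels if y == positive)          # ground-truth positives
    precision = true_pos / predicted_pos if predicted_pos else 0.0
    recall = true_pos / actual_pos if actual_pos else 0.0
    return precision, recall
```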

There is a tradeoff between the two. If you predict all images to be NSFW, you will have perfect recall — you caught all the NSFW images! — but horrible precision because you incorrectly classified many clean images as NSFW. The goal is to have both high recall and high precision, no matter what confidence threshold is used.

With our visual moderation models, we’ve achieved this. We plotted the results of our test as a precision/recall curve, showing that even at high recall we maintain high precision and vice versa while our competitors fall behind us.

The above plot is for NSFW content detection. Our precision at 90% recall is nearly perfect at 99.6%, which makes our error rate a whopping 45 times lower than Public Cloud C. Even Public Clouds A and B, which are closer to us in performance, have error rates 12.5 times higher and 22.5 times higher than ours respectively.

We also benchmarked our model for suggestive content detection, or content that is inappropriate but not as explicit as our NSFW category. Hive’s error rate remains far below the other models, resting at 6 times lower than Public Cloud A and 12 times lower than Public Cloud C. Public Cloud B did not offer a similar category and thus could not be compared.

We only ran our test on NSFW/explicit imagery because our competitors do not have equivalent classes for our other visual moderation categories such as drugs, gore, and terrorism. This makes comparisons difficult, though it also speaks to the fact that we offer far more classes than many of our competitors. With more than 90 subclasses, our visual moderation model far exceeds its peers in the granularity of its results: we don’t just have a class for NSFW, but also for nudity, underwear, cleavage, and other smaller categories that give our customers a more in-depth understanding of their content.

Text Moderation

We used precision/recall curves to compare our text moderation model as well. For this comparison, we charted our performance across eight different classes. Hive outperforms all peer models on every single one.

Hive’s error rate on sexual content is 4 times lower than its closest competitor, Public Cloud B. Our other two competitors for that class both have error rates 6 times higher. The threat class boasts similar metrics, with Hive’s error rate between 2 and 4 times lower than all its peers.

Hive’s model for hateful content detection performs on par with or slightly ahead of our competitors at all thresholds. Our model for bullying content does the same, with an error rate 2 times lower than all comparable models.

Hive is one of few companies to offer text moderation for drugs and weapons, and our error rates here are also worth noting — our only competitor has an error rate 4 and 8 times higher than ours for drugs and weapons respectively.

Hive also offers the child exploitation class, one that few others provide. With this class, we achieve an error rate 8 times lower than our only other major competitor.

Audio Moderation

For Audio Moderation, we evaluate our model using word error rate (WER), which is the gold-standard metric for a speech recognition system. Word error rate is the number of errors divided by the total number of words transcribed, and a perfect word error rate is 0. As you can see, we achieve the best or near-best performance across a variety of languages.
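WER is the word-level Levenshtein (edit) distance between the transcript and the reference, normalized by the reference length. A sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Substitutions + insertions + deletions, divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```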


We excel across the board, with the lowest word error rate on the majority of the languages offered. On Spanish in particular, our word error rate is more than 4 times lower than Public Cloud B.

For German and Italian we are very close behind Public Cloud C and remain better than all other competitors.

Optical Character Recognition (OCR)

To benchmark our OCR model, we calculated the F-score for our model as well as several of our competitors. F-score is the harmonic mean of a model’s precision and recall, combining both of them into one measurement. A perfect F-score is 1. When comparing general F-scores, Hive excels as shown below.
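As a formula in code, the F-score (here, F1) combines the two metrics into one number:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall; a perfect score is 1."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model can't hide poor recall behind high precision or vice versa.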

We also achieve best-in-class or near-best performance when comparing by language, as shown in the graphs below. With some languages, we excel by quite a large margin; for Chinese and Korean in particular, Hive’s F-score is more than twice that of any competitor. We fall slightly behind in Hindi, yet still perform significantly better than Public Cloud A.

Demographics

We evaluated our age prediction model by calculating mean error, or how far off our age predictions were from the truth. Since the test dataset we used is labeled using age ranges and not individual numbers, mean error is defined as the distance in years from the closest end of the correct age range (i.e., guessing 22 for someone in the range 25-30 is an error of 3 years). A perfect mean error is 0.
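That range-based error can be sketched as:

```python
def range_error(prediction, low, high):
    """Years from the closest end of the correct age range; 0 if inside it."""
    if prediction < low:
        return low - prediction
    if prediction > high:
        return prediction - high
    return 0
```

For instance, as in the example above, a prediction of 22 for someone in the 25-30 bucket is an error of 3 years, while any prediction inside the bucket counts as 0.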

As you can see from this distribution, Hive has a significantly lower mean error in the three lowest age buckets (0-2, 3-9, and 10-19). In the age range 0-2, our mean error is 11 times lower than Public Cloud A’s. For the ranges 3-9 and 10-19, it is 5 times and 3 times lower respectively, still quite a large margin. Hive also excels notably at the oldest age bucket (70+), where our mean error is nearly 7 times lower than Public Cloud A’s.

For a broader analysis, we compared our overall mean error across all age buckets, as well as the accuracy of our gender predictions.

AutoML

One of the newest additions to our product suite, our AutoML platform allows you to train image classification and text classification models, and fine-tune large language models, with your own custom datasets. To evaluate the effectiveness of this tool, we trained models on the same data both on our platform and on competitors’ platforms, then measured the performance of the resulting models on the same test set.

For image classification, we used three different classification tasks to account for the fact that different tasks have different levels of inherent difficulty and thus may yield higher or lower performing models. We also used three different dataset sizes for each classification task in order to measure how well the AutoML platform is able to work with limited amounts of examples.

We compared the resulting models using balanced accuracy, which is the arithmetic mean of a model’s true positive rate and true negative rate. A perfect balanced accuracy is 100%.
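Balanced accuracy is easy to compute from predictions and labels. A sketch for the binary case:

```python
def balanced_accuracy(predictions, labels):
    """Mean of the true positive rate and true negative rate, as a percentage."""
    true_pos = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    true_neg = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    pos = sum(1 for y in labels if y == 1)
    neg = sum(1 for y in labels if y == 0)
    return 100.0 * 0.5 * (true_pos / pos + true_neg / neg)
```

Unlike plain accuracy, this metric isn't inflated by class imbalance: a model that always predicts the majority class scores only 50%.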

As shown in the above tables, Hive achieves best or near-best accuracy across all sets. Our results are quite similar to Public Cloud B’s, pulling ahead on the product dataset. We fell to near-best performance on the smoking dataset, which is the most difficult of the three classification tasks. Even then, we remained within a few percentage points of the winner, Public Cloud B.

For text classification, we trained models for three different categories: sexual content, drugs, and bullying. The results are in the table below. Hive outperforms all competitors on all three categories using all dataset sizes.

Another important consideration when it comes to AutoML is training time. An AutoML tool could build accurate models, but if it takes an entire day to do so it still may not be a great solution. We compared the time it took to train Hive’s text classification tool for the drugs category, and found that our platform was able to train the model 10 times as fast as Private Company A and 32 times as fast as Public Cloud B. And for the smallest dataset size of 100 examples, we trained the model 18 times faster than Private Company A and 268 times faster than Public Cloud B. That’s a pretty significant speedup.

Measuring the performance of fine-tuned LLMs on our foundation model is a bit more complicated. Here we evaluate two different tasks: question answering and closed-domain classification. 

To measure performance on the question answering task, we used a metric called token accuracy. Token accuracy indicates how many tokens are the same between the model’s response and the expected response from the test set. A perfect token accuracy is 100%. As shown below, our token accuracy is higher than our competitors or around the same for all dataset sizes.
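The exact tokenization and matching scheme isn't specified, but a position-by-position comparison against the reference response conveys the flavor of the metric. A sketch:

```python
def token_accuracy(predicted_tokens, reference_tokens):
    """Percentage of reference tokens the model reproduced, position by position."""
    matches = sum(1 for p, r in zip(predicted_tokens, reference_tokens) if p == r)
    return 100.0 * matches / len(reference_tokens)
```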

This is also true for the classification task, where we maintained roughly the same performance as Public Cloud A across the various dataset sizes. Below are the full results of our comparison.

Final Thoughts

As illustrated throughout this in-depth look into the performance of our models, we truly earn the title “best-in-class.” We conduct these benchmarks not just to justify that title, but more so as part of our constant effort to make our models the best that they can be. Reviewing these analyses helps us to identify our strengths, yes, but also our weaknesses and where we can improve.

If you have any questions about any of the benchmarks we’ve discussed here or any other questions about our models, please don’t hesitate to reach out to us at sales@thehive.ai.


3 Tips and Tricks to Building ML Models

Hive was thrilled to have our CTO Dmitriy present at the Workshop on Multimodal Content Moderation during CVPR last week, where we provided an overview of a few important considerations when building machine learning models for classification tasks. What are the effects of data quantity and quality on model performance? Can we use synthetic data in the absence of real data? And after model training is done, how do we spot and address bias in the model’s performance?

Read on to learn some of the research that has made our models truly best-in-class.

The Importance of Quality Data

Data is, of course, a crucial component in machine learning. Without data, models would have no examples to learn from. It is widely accepted in the field that the more data you train a machine learning model with, the better. Similarly, the cleaner that data is, the better. This is fairly intuitive — the basic principle is true for human learners, too. The more examples to learn from, the easier it is to learn. And if those examples aren’t very good? Learning becomes more difficult.

But how important is good, clean data to building a good machine learning model? Good data is not always easy to come by. Is it better to use more data at the expense of having more noise? 

To investigate this, we trained a binary image classifier to detect NSFW content, varying the amount of data between 10 images and 100k images. We also varied the noise by flipping the labels on between 0% and 50% of the examples. We then plotted the balanced accuracy of the resulting models using the same test set.
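The noise-injection step itself is simple. A sketch of label flipping, assuming binary 0/1 labels (the helper name and seed are ours, for illustration):

```python
import random

def flip_labels(labels, noise_fraction, seed=0):
    """Return a copy of binary 0/1 labels with noise_fraction of them flipped."""
    rng = random.Random(seed)
    flipped = labels[:]
    n_flip = int(len(labels) * noise_fraction)
    for i in rng.sample(range(len(labels)), n_flip):
        flipped[i] = 1 - flipped[i]  # 0 -> 1, 1 -> 0
    return flipped
```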

The result? It turns out that data quality is more important than we may think. It was clear that, as expected, accuracy was the best when the data was both as large as possible (100k examples) and as clean as possible (0% noise). From there, however, the table gets more interesting.

As seen above, the model trained with only 10k examples and no noise performs better than the model trained with ten times as much data (100k) at 10% noise. The general trend appears similar throughout: clean data matters a great deal, and noise can quickly tank performance even at the maximum amount of data. In other words, less data is sometimes preferable to more data if it is cleaner.

We wondered how this would change with a more detailed classification problem, so we built a new binary image classifier. This time, we trained the model to detect images of smoking, a task that requires picking up signal from a small part of an image.

The outcome, shown below, echoes the results from the NSFW model — clean data has a great impact on performance even with a very large dataset. But the quantity of data appears to be more important than it was in the NSFW model. While 5000 examples with no noise got around 90% balanced accuracy for the NSFW model, that same amount of noiseless data only got around 77% for the smoking classifier. The increase in performance, while still strongly tied to data quantity, was noticeably slower and only the largest datasets produced well-performing models.

It makes sense that quantity of data would be more important with a more difficult classification task. Data noise also remained a crucial factor for the models trained with more data — the 50k model with 10% noise performed about the same as the 100k model with 10% noise, illustrating once more that more data is not always better if it is still noisy.

Our general takeaways here are that while both data quality and quantity matter quite a bit, clean data is more important beyond a certain quantity threshold. This threshold is where performance increases begin to plateau as the data grows larger, yet noisy data continues to have significant effects on model quality. And as we saw by comparing the NSFW model and the smoking one, this quality threshold also changes depending on the difficulty of the classification task itself.

Training on Synthetic Data: Does it Help or Hurt?

So if having lots of clean data is important, what can be done when good data is hard to find or costly to acquire? With the rise of AI image generation over the past few years, more and more companies have been experimenting with generated images to supplement visual datasets. Can this kind of synthetic data be used to train visual classification models that will eventually classify real data?

In order to try this out, we trained five different binary classification models to detect smoking. Three of the models were trained exclusively with real data (10k, 20k, and 40k examples respectively), one was trained on a mix of real and synthetic images (10k real and 30k synthetic), and one was trained entirely on synthetic data (40k). Each dataset had an even split of 50% smoking and 50% nonsmoking examples. To evaluate the models, we used two balanced test sets: one with 4k real images and one with 4k synthetic images. All synthetic images were created using Stable Diffusion.
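Assembling such a mixed training set amounts to sampling from each pool and shuffling. A hypothetical sketch (the function and variable names are ours, and we assume each pool is a list of already-labeled examples):

```python
import random

def build_training_set(real, synthetic, n_real, n_synthetic, seed=0):
    """Sample a mixed real/synthetic training set, e.g. 10k real + 30k synthetic.

    `real` and `synthetic` are lists of (image_path, label) pairs, assumed
    to already be balanced 50/50 between smoking and nonsmoking examples.
    """
    rng = random.Random(seed)
    mixed = rng.sample(real, n_real) + rng.sample(synthetic, n_synthetic)
    rng.shuffle(mixed)  # avoid a block of all-real followed by all-synthetic
    return mixed
```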

Looking at the precision and recall curves for the various models, we made an interesting discovery. Unsurprisingly, the model trained on the largest entirely real dataset (40k) performed best. More notably, the model trained on 10k real images and 30k synthetic images performed significantly better than the one trained on only 10k real images.

These results suggest that while large amounts of real data are best, a mixture of synthetic and real data could in fact boost model performance when little data is available.

Keeping an Eye Out For Bias

After model training is finished, extensive testing must be done to make sure there are no biases in the model's results. Some biases exist in the real world and are thus often ingrained in real-world data, such as racial or gender bias; others arise in the data purely by coincidence.

A great example of how unpredictable certain biases can be came up recently while training an NSFW detection model, which started flagging many pictures of computer keyboards as false positives. Upon closer investigation, we found that many of the NSFW pictures in our training data were photos of computers whose screens were displaying explicit content. Since the computer screens were the focus of these images, keyboards were also often included, leading to the false association that keyboards are an indicator of NSFW imagery.

Three images that were falsely categorized as NSFW

In order to correct this bias, we added more non-NSFW keyboard examples to the training data. Correcting the bias in this way not only addresses the bias itself but also boosts general model performance. Of course, addressing bias is even more critical when the data carries current or historical biases against minority groups, since leaving them in place perpetuates them by ingraining them into future technology. The importance of detecting and correcting these biases cannot be overstated; leaving them unaddressed carries risks far beyond mislabeling a keyboard as NSFW.

Regardless of the type of bias, it's important to note that biases aren't always readily apparent. Before the bias was addressed, the model had a balanced accuracy of 80%, high enough that the bias may not have been immediately noticeable since errors weren't especially frequent. The takeaway is thus not just that bias correction matters, but that investigating potential biases is necessary even when you don't expect to find any.

Takeaways

Visual classification models are in many ways the heart of Hive — they were our main launching point into the space of content moderation and AI-powered APIs more broadly. We’re continuously searching for ways to keep improving these models as the research surrounding them grows and evolves. Conclusions like those discussed here — the importance of clean data, particularly when you have lots of it, the possible use of synthetic data when real data is lacking, and the need to find and correct all biases (don’t forget about the unexpected ones!) — greatly inform the way we build and maintain our products.


Build Your Own Custom ML Models with Hive AutoML

We’re excited to announce Hive’s new AutoML tool that provides customers with everything they need to train, evaluate, and deploy customized machine learning models. 

Our pre-trained models solve a wide range of use cases, but we will always be limited by the number of models we can build. Now customers whose unique needs and moderation guidelines don't quite match any of our existing solutions can create their own model, custom-built for their platform and easily accessible via API.

AutoML can be used to augment our current offerings or to create new models entirely. Want to flag a particular subject that doesn’t exist as a head in our Text Moderation API, or a certain symbol or action that isn’t part of our Visual Moderation? With AutoML, you can quickly build solutions for these problems that are already integrated with your Hive workflow.

Let’s walk through our AutoML process to illustrate how it works. In this example, we’ll build a text classification model that can determine whether or not a given news headline is satirical. 

First, we need to get our data in the proper format. For text classification models, all dataset files must be in CSV format. One column should contain the text data (titled text_data), and each other column represents a model head (classification category). The values within each row of a given column represent the classes (possible classifications) within that head. An example of this formatting for our satire model is shown below:
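For concreteness, here is a hypothetical sketch of building such a CSV for the satire model. The text_data column name comes from the format described above; the head name (is_satire), class values, and example headlines are our own invention:

```python
import csv
import io

# Hypothetical training rows: a "text_data" column plus one head column.
rows = [
    {"text_data": "Local man wins argument with his GPS", "is_satire": "satire"},
    {"text_data": "City approves budget for new library", "is_satire": "not_satire"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text_data", "is_satire"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A multi-headed model would simply add more head columns alongside is_satire.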

The first page you’ll see on Hive’s AutoML platform is a dashboard with all of your organization’s training projects. In the image below, you’ll see how the training and deployment status of old projects are displayed. To create our satire classifier, we’re going to make a new project by hitting the “Create New Project” button in the top right corner.

We’ll then be prompted to provide a name and description for the project as well as training data in the form of a CSV file. For test data, you can either upload a separate CSV file or choose to randomly split your training data into two files, one to be used for training and the other for testing. If you decide to split your data, you will be able to choose the percentage that you would like to split off.

After all of that is entered, we are ready to train! Beginning model training is as easy as hitting a single button. While your model trains, you can easily view its training status on the Training Projects page.

Once training is completed, your project page will show an analysis of the model’s performance. The boxes at the top allow you to decide if you want to look at this analysis for a particular class or overall. If you’re building a multi-headed model, you can choose which head you’d like to evaluate as well. We provide precision, recall, and balanced accuracy for all confidence thresholds as well as a PR curve. We also display a confusion matrix to show how many predictions were correct and incorrect per class.
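As a refresher on the metrics shown in that analysis, precision, recall, and balanced accuracy can all be derived from confusion-matrix counts. A minimal sketch (our own helper functions, not the platform's internals):

```python
def precision(tp, fp):
    """Of everything predicted positive, the fraction that truly is."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of all true positives, the fraction the model caught."""
    return tp / (tp + fn)

def balanced_accuracy(tp, fp, tn, fn):
    """Mean of per-class recall: sensitivity on positives, specificity on negatives."""
    return (recall(tp, fn) + tn / (tn + fp)) / 2
```

Because balanced accuracy averages per-class recall, it stays informative even when the classes are heavily imbalanced, which is why it appears throughout the experiments above.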

Once you’re satisfied with your model’s performance, select the “Create Deployment” button to launch the model. Like model training, deployment takes a few moments. After deployment is complete, you can view it in your Hive customer dashboard, where you can access your API key, view current tasks, and access other information just as you would with our pre-trained models.

We’re very excited to be adding AutoML to our offerings. The platform currently supports both text and image classification, and we’re working to add support for large language models next. If you’d like to learn more about our AutoML platform and other solutions we’re building, please feel free to reach out to sales@thehive.ai or contact us here.