
Customizing Hive Moderation Models with AutoML

Hive’s AutoML platform gives anyone the ability to create best-in-class machine learning solutions for the particular problems they face. Our platform can create classification models and large language models for an endless range of use cases. If you need a model that bears no resemblance to any pre-trained model we offer, no problem! We’ll help you build one yourself.

Hive AutoML builds your custom models with the same technology behind our industry-leading ML tools. This way you get the best of both worlds: Hive’s impeccable model performance and a tool custom-built to address your needs.

Hive AutoML for Content Moderation

Today we’ll be focusing on one particular application of our AutoML platform: customizing our moderation models. These models kickstarted our success as a company and are used by many of the largest online platforms in the world. But the moderation guidelines of many sites differ from each other, and sometimes our base moderation models don’t quite fit them. 

With AutoML, you can create your own version of our moderation models by fine-tuning our pre-existing heads or adding new heads entirely. We will then train a version of our high-performing base model with your added data to create a tool that best suits your platform’s moderation process. 

In this blog post, we’ll walk through both how to add more data to an existing Hive moderation head and how to add a new custom moderation head. We’ll demonstrate the former by building a visual moderation model and the latter by building a text moderation model. Audio moderation is not currently supported on AutoML.

Building a Visual Moderation Model

Hive AutoML for Visual Moderation allows you to customize our Visual Moderation base model to fit your specific needs. Using your own data, you can add new model heads or fine-tune any of the existing 45+ subclasses that we provide as part of our Visual Moderation tool. A full list of these classes is available here.

For this walkthrough, we’ll be fine-tuning the tobacco head. Our data will thus include images and labels for this head only. The resulting model will include all Hive visual moderation heads, with the tobacco head re-trained to incorporate this new data.

Uploading Your Dataset

Before you start building your model, you first need to upload any datasets you’ll use to the Dataset section of our AutoML platform. For Visual Moderation model training, we require a CSV file with a column for your image data (as publicly accessible image URLs) and an additional column for each head you wish to train.

For this tutorial, we’re going to train using additional data for the tobacco class. The below CSV includes image URLs and a column of labels for that head.

Dataset formatting, images have either “yes_tobacco” or “no_tobacco” labels
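
For reference, a minimal version of such a file might look like the sketch below (the URLs are placeholders, and we’ve assumed the head column is simply named “tobacco”):

    image_url,tobacco
    https://example.com/image-001.jpg,yes_tobacco
    https://example.com/image-002.jpg,no_tobacco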

After you’ve selected your dataset file, you’ll be asked to confirm the column mapping. Make sure the columns of your dataset have been interpreted correctly and that you have the correct format (image or text) selected for each column.

The column mapping confirmation page lets you double check that the data has been processed correctly.

Once you’ve confirmed your mapping, you can preview and edit your data. This page opens automatically after any dataset upload. You will be able to check whether all images were uploaded successfully, view the images themselves, and change their respective labels if desired. You can also add or delete any data that you wish to before you proceed onto model training.

The dataset preview page for an image-based dataset.

Creating a Dataset Snapshot

When you’re happy with your dataset, you’ll then need to create a snapshot from it. A snapshot is a point-in-time export of a dataset that validates that dataset for training. Once a snapshot is created, its contents cannot be changed. This means that while you can continue to edit your original dataset, your snapshot will not change along with it — if you make any changes, you’ll need to create a new snapshot after you’re finished with your changes.

The information you’ll be asked to provide when creating a snapshot.

You can create a snapshot from any live dataset. To do so, simply click the “Create Snapshot” button on that dataset’s detail page. You’ll be prompted to provide some information, most notably which columns to use for image input and data labels. After your snapshot is successfully created, you’re ready to start training!

Creating a New Model

To create a training, you can select the “Create Model” button on the snapshot detail page. You’ll once again be asked to provide several pieces of information, including your model’s name, description, base model, and datasets. Make sure to select “Hive Vision Moderation” under the “Base Model” category as opposed to a general image classification model.

When creating your model, make sure you have the correct model type and base model selected.

You can choose to upload a separate test dataset or split off a random section of your training dataset to use instead. If you choose to upload a separate test dataset, this dataset must contain the same heads and classes as your training dataset. After uploading your dataset, you will also need to create a snapshot of that dataset before you begin model training.

If you choose to split off a section of your training dataset, you will be able to choose the percentage of that dataset that you would like to use for testing as you create your training.

Before you begin your training, you can also edit training preferences such as the maximum number of training epochs, model selection rule, model selection label, early stopping, and invalid data criteria. If you’re unsure what any of these options mean, an information icon next to each setting explains it.

The training options you’re offered as you create your model include max epochs, model selection rule, and more.

After uploading your training (and, if desired, test) dataset and selecting your desired training options, you’re ready to create your model. After you begin training, your model will be ready within 20 minutes. You will automatically be directed to the model’s detail page, where you can watch its progress as it trains.

Playground and Metrics: Evaluating Your Model

When your model has completed its training, the model’s detail page will display a variety of metrics in order to help you analyze your model’s performance. At the top of the page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You can toggle whether these metrics are calculated by head overall or by each class within a head.

The model details page displays performance metrics once the model has completed training.

Below these numbers, you’ll also be able to view an interactive precision/recall (PR) curve. This is the gold-standard metric for a classification model and gives you more insight into how your model balances the inherent tradeoff between high precision and high recall.

You’ll then be shown a confusion matrix, which is an exact breakdown of the true positives, false positives, true negatives, and false negatives of the model’s results. This can highlight particular weak spots of your model and potential areas you may want to address with further training. As shown below, our example model has no false positives but several false negatives — images with tobacco that were classified as “no_tobacco.”

This model’s confusion matrix, which shows that there is an issue with false negatives.
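
As a refresher on how the headline metrics relate to these four counts, here is a small sketch using the standard definitions, shown purely for intuition:

    def classification_metrics(tp: int, fp: int, tn: int, fn: int):
        """Compute the metrics shown above from confusion-matrix counts."""
        precision = tp / (tp + fp)          # of flagged items, how many were correct
        recall = tp / (tp + fn)             # of actual positives, how many were caught
        f1 = 2 * precision * recall / (precision + recall)
        specificity = tn / (tn + fp)        # of actual negatives, how many were caught
        balanced_accuracy = (recall + specificity) / 2
        return precision, recall, f1, balanced_accuracy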

The final section of our metrics page is an area called the “playground.” The playground allows you to test your newly created AutoML model by submitting sample queries and viewing the responses. This feature is another great way to explore the way that your model responds to different kinds of prompts and the areas in which it could improve. You are given 500 free sample queries — beyond that you will be prompted to deploy your model with the cost of each submission charged to your organization’s billing account.

To test our tobacco model, we submitted the following sample image. To the right of it you can see the results for each Hive visual moderation class, including tobacco, where the image is classified correctly with a perfect confidence score of 1.00.

An example image of a man smoking a cigar and the labels assigned to it by our newly trained moderation model.

Deploying Your Model

To begin using your model, you can create a deployment from it. This will open the project on Hive Data, where you will be able to upload tasks, view tasks, and access your API key as you would with any other Hive Data project. An AutoML project can have multiple active deployments at one time.
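
For illustration, here’s a minimal sketch of what a task submission can look like in Python. The endpoint, key, and image URL below are placeholders; confirm the exact endpoint and parameters for your deployment in its documentation on Hive Data:

    import requests

    API_KEY = "your_api_key"  # placeholder; copy the key from your deployment on Hive Data

    # Submit a publicly accessible image URL to the synchronous task endpoint
    response = requests.post(
        "https://api.thehive.ai/api/v2/task/sync",
        headers={"Authorization": f"Token {API_KEY}"},
        data={"url": "https://example.com/test-image.jpg"},
    )
    print(response.json())  # model output, including per-class confidence scores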

Building a Text Moderation Model

Just like for Visual Moderation, our AutoML platform allows you to customize our Text Moderation base model to fit your particular use cases by adding or re-training model categories. The full class definitions for all 13 of our currently offered heads are available here. For this section of the walkthrough, we will be creating a new custom head in order to add capabilities to our model that we don’t currently offer: sentiment analysis.

Sentiment analysis is the task of categorizing the emotional tone of a piece of text, typically into two labels: positive or negative. Occasionally a sentiment analysis task breaks the sentiment down into more specific categories, such as joyful, angry, etc. Adding this kind of information to our existing Hive Text Moderation model could prove useful for platforms that wish either to exclude negative content on sites for children or to put limits on certain comment sections or forums where negative commentary is unwanted.

Sentiment analysis is a complex, language-based problem: understanding the meaning and tone of a sentence is not always easy even for humans. To keep things simple, we’ll use just the two possible classifications of positive and negative.

Uploading Your Dataset

As with creating a Visual Moderation model, you’ll need to upload your data as a CSV file to the “Data” section of our AutoML platform prior to model training. The format of our sentiment analysis dataset is shown below, though the column names do not need to be anything specific in order to be processed correctly.

The text data and labels for our sentiment analysis model, formatted into two columns.
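
In plain text, a couple of illustrative rows look like this (using the “text_data” and “sentiment” column names referenced later in this walkthrough; the example rows are our own):

    text_data,sentiment
    "Had such a great time at the concert last night!",positive
    "This update ruined the app for me.",negative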

After uploading your dataset, you’ll be asked to confirm the format of each column as text, image, or JSON. If you’d like to disregard a column entirely, you can also select “Ignore Column.” After you hit confirm, you can preview and edit your dataset just as you could with your image dataset in the Visual Moderation example. The preview page for text datasets is shown below.

The preview page for a text-based dataset.

Creating a Dataset Snapshot

As described in the Visual Moderation walkthrough, you’ll need to create a snapshot of your dataset in order to validate it prior to model training. When making your snapshot, make sure that you select “Text Classification” as your “Snapshot Type.” This will ensure that your snapshot is sufficient to train a Text Moderation model. You will also need to specify which column contains your text input and which contains the labels for that text input, as shown below for our dataset.

When creating your snapshot, you will be asked to provide some information about the dataset.

In the example above, we’ve selected our “text_data” column as our input and our “sentiment” column as our training labels.

Creating a New Model

After you’ve created your snapshot, you’ll automatically be brought to that snapshot’s detail page. From this page, starting a new model training is easy: just hit the big “Create New Model” button on the top right. You’ll be asked to name your model and provide a few key details about the training, such as which snapshots you’d like to use as your data and how many times a training will cycle through that data.

You’ll be able to configure your training by choosing a model selection rule, maximum number of epochs, and more.

Make sure you’ve selected “Text Classification” as your model type and “Hive Text Moderation” as your base model. Then you’re ready to start your training! Model training takes up to 20 minutes, depending on several factors including the size of your dataset; most trainings complete in just a few minutes.

Metrics and Model Evaluation

Once your training has completed, you’ll be redirected to the details page for your new moderation model. On this page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You will also be able to view a precision/recall (P/R) curve and confusion matrix in order to further analyze the performance of your model.

The sentiment analysis model performs fairly well upon first training, with most metrics around 86%.

The overall performance of the model is quite good for a difficult task such as sentiment analysis. While there is room for improvement, this first round of training suggests that with some additional data we could likely bring all metrics above 90%. The confusion matrix indicates that false negatives are a specific area of weakness; a possible fix would be to increase the number of positive examples in the training data and observe whether this improves results.

The confusion matrix for our model, which shows a 19% false negative rate.

We do not currently offer the playground feature for text moderation models, though we are working on this and expect it to be released in the coming months.

Deploying Your Model

The process for deploying your model is identical to the way we deployed our Visual Moderation model in the first example. To deploy any model, simply click “Create Deployment” from that model’s details page. Once deployed, you can access your unique API keys and begin to submit tasks to the model like any other Hive model.
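
As a sketch, submitting text to the deployed model mirrors the visual example from earlier, except that you pass the text itself rather than a media URL (endpoint and field names as assumed above; check your deployment’s documentation for specifics):

    import requests

    API_KEY = "your_api_key"  # placeholder key from your deployment

    response = requests.post(
        "https://api.thehive.ai/api/v2/task/sync",
        headers={"Authorization": f"Token {API_KEY}"},
        data={"text_data": "I loved every minute of this game!"},
    )
    print(response.json())  # includes scores from the custom sentiment head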

Final Thoughts

We hope this in-depth walkthrough was helpful. If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.


How to Train Models with Hive AutoML

What is Hive AutoML?

Hive’s AutoML platform allows you to quickly train, evaluate, and deploy machine learning models for your own custom use cases. The process is simple — just select your desired model type, upload your datasets, and you’re ready to begin training! 

Since we announced the initial release of our AutoML platform, we’ve added support for Large Language Model training. Now you can build everything from classification models to chatbots, all in the same intuitive platform. To illustrate how easy the model-building process is, we’ll walk through it step-by-step with each type of model. We’ll also provide a link to the publicly available dataset we used as an example so that you can follow along.

Training an Image Classification Model

First we’re going to create an Image Classification model. This type of model is used to identify certain subjects, settings, and other visual attributes in both images and videos. For this example, we’ll be using a snacks dataset to identify 20 different kinds of food (strawberries, apples, hot dogs, cupcakes, etc.). To follow along with this walkthrough, first download the images from this dataset, which are sorted into separate folders for each label.

Formatting the Datasets

After downloading the image data, we’ll need to put this data in the correct format for our AutoML training. For Image Classification datasets, the platform requires a CSV file that contains one column for image URLs titled “image_url” and up to 20 other columns for the classification categories you wish to use. This requires creating publicly accessible links for each image in the dataset. For this example, all 20 of our food categories will be part of the same head — food type. To do this, we formatted our CSV as follows:

The snacks dataset in the correct format for our AutoML platform
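
In plain text, the first few rows look roughly like the sketch below (the URLs are placeholders, and the head column name “food_type” is our own choice):

    image_url,food_type
    https://example.com/snacks/0001.jpg,strawberry
    https://example.com/snacks/0002.jpg,hot dog
    https://example.com/snacks/0003.jpg,cupcake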

This particular dataset is within the size limitations for Image Classification datasets. When uploading your own dataset, it is crucial that you ensure it meets all of the sizing requirements and other specifications or the dataset upload will fail. These requirements can be found in our AutoML documentation.

Both test and validation datasets are provided as part of the snacks dataset. When using your own datasets, you can choose to upload a test dataset or to split off a random section of your training data to use instead. If you choose the latter, you’ll be able to select what percentage of that data you want to use as test data as you create your training.

Uploading the Datasets

Before we start building the model, we first need to upload both our training and test datasets to the “Datasets” section of our AutoML platform. This part of our platform validates each dataset before it can be used for training as well as stores all datasets to be easily accessed for future models. We’ll upload both the training and test datasets separately, naming them Snacks (Train) and Snacks (Test) respectively.

Creating a Training

To start building the model, we’ll head to our AutoML platform and select the “Create New Model” button. We’ll then be brought to a project setup page where we’ll be prompted to enter a project name and description. For Model Type, we’ll select “Image Classification.” On the right side of the screen, we can add our training dataset by selecting from our dataset library. We’ll select the datasets called Snacks (Train) and Snacks (Test) that we just uploaded.

The “Create New Model” page

And just like that, we’re ready to start training our model! To begin the training process, we’ll click the “Start Training Model” button. The model’s status will shift to “Queued” and then to “In Progress” while the model trains. This will likely take several minutes. When training is complete, the status will display as “Completed.”

Evaluating the Model

After model training is complete, the page for that project will show various performance metrics so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The model’s project page after training has completed

Below that, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that two images of cupcakes were incorrectly classified as cookies — an understandable mistake as the two are both decorated desserts.

The confusion matrix for our snacks model

These detailed metrics can help us know which categories to target if we want to train a better version of the model. If you would like to retrain your model, you can click the “Update Model” button to begin the training process again.

Deploying the Model

Even after the first time training this model, we’re pretty happy with how it turned out. We’re ready to deploy the model and start using it. To deploy, select the project and click the “Create Deployment” button in the top right corner. The project’s status will shift to “Deploying.” The deployment may take a few minutes.

Submitting Tasks via API

After the deployment is complete, we’re ready to start submitting tasks via API as we would any pre-trained Hive model. We can click on the name of any individual deployment to open the project on Hive Data, where we can upload tasks, view tasks, and access our API key. There is also a button to “Undeploy” the project, if we wish to deactivate it at any point. Undeploying a model is not permanent — we can redeploy the project if we later choose to.
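
As a quick sketch (with a placeholder key and URL, and assuming the same synchronous task endpoint used throughout these examples; confirm the exact URL and parameters for your project on Hive Data), a submission might look like:

    import requests

    API_KEY = "your_api_key"  # found on the deployment's project page in Hive Data

    response = requests.post(
        "https://api.thehive.ai/api/v2/task/sync",
        headers={"Authorization": f"Token {API_KEY}"},
        data={"url": "https://example.com/snack-photo.jpg"},
    )
    print(response.json())  # predictions for the food-type head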

To see a video of the entire training and deployment process for an Image Classification model, head over to our YouTube channel.

Training a Text Classification Model

We’ll now walk through that same training process in order to build a Text Classification model, but with a few small differences. Text classification models can be used to sort and tag text content by topic, tone, and more. For this example, we’ll use the Twitter Sentiment Analysis dataset posted by user carblacac on Hugging Face. This dataset consists of a series of short text posts originally published to Twitter, each labeled with a negative (0) or positive (1) overall sentiment. To follow along with this walkthrough, you can download the dataset here.

Formatting the Datasets

For Text Classification datasets, our AutoML platform requires a CSV with the text data in a column titled “text_data” and up to 20 other columns that each represent classification categories, also called model heads. Using the Twitter Sentiment Analysis dataset, we only need to rename the columns like so:

Our Twitter Sentiment Analysis data formatted correctly for our AutoML platform

The data consists of two sets, a training set with 150k examples and a test set with 62k examples. Before we upload our dataset, however, we must ensure that it fits our Text Classification dataset requirements. The training set does not: our AutoML platform only accepts CSV files with 100,000 rows or fewer, and this one has 150,000. To use this dataset, we’ll have to remove some examples. To keep the number of examples for each class relatively equal, we removed 25,000 negative (0) examples and 25,000 positive (1) ones.
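
One simple way to do this kind of balanced downsampling is with pandas. The sketch below assumes the columns have already been renamed as described above, with the label column named “sentiment”:

    import pandas as pd

    df = pd.read_csv("twitter_train.csv")  # columns: text_data, sentiment (assumed names)

    # Sample 50,000 examples per class (from 75,000 each) so the file
    # stays under the 100,000-row limit while the classes stay balanced
    balanced = (
        df.groupby("sentiment", group_keys=False)
          .apply(lambda g: g.sample(n=50000, random_state=42))
    )
    balanced.to_csv("twitter_train_100k.csv", index=False)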

Uploading the Datasets

After fixing the size issue, we’re ready to upload our datasets. As is the case with all model types, we must first upload any datasets we are going to use before we create our training.

Creating a Training

After both the training and test datasets have been validated, we’re ready to start building our model. On our AutoML platform, we’ll click the “Create New Model” button and enter a project name and description. For our model type, this time we’ll select “Text Classification.” Finally, we’ll add our training and test datasets that we just uploaded.

We’re then ready to start training! This aspect of the training process is identical to the one shown above for an Image Classification model. Just click the “Start Training Model” button on the bottom right corner of the screen. When training is complete, the status will display as “Completed.”

Evaluating the Model

Just like in our Image Classification example, the project page will show various performance metrics after training is complete so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The project page for our Twitter Sentiment Analysis model after it has completed training

Below the precision, recall, and balanced accuracy, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that while there were a fair number of mistakes for each class, there were more cases in which a positive example was mistaken for a negative one than the other way around.

While the results of this training are not as good as our Image Classification example, this is somewhat expected — sentiment analysis is a more complex and difficult classification task. While this model could definitely be improved by retraining with slightly different data, we’ll demonstrate how to deploy it. To retrain your model, however, all you need to do is click the “Update Model” button and begin the training process again.

Deploying the Model

Deploying your model is the exact same process as described above in the Image Classification example. After the deployment is complete, you’ll be able to view the deployment on Hive Data and access the API keys needed in order to begin using the model. 

To see a video of the entire training and deployment process for a Text Classification model, head over to our YouTube channel.

Training a Large Language Model

Finally, we’ll walk through the training process for a Large Language Model (LLM). This process is slightly different from the training process for our classification model types, both in terms of dataset formatting and model evaluation.

Our AutoML platform supports two different types of LLMs: Text and Chat. Text models are geared towards generating passages of writing or lines of code, whereas chat models are built for interactions with the user, often in the format of asking questions and receiving concise, factual answers. For this example, we’ll be using the Viggo dataset uploaded by GEM to Hugging Face. To follow along with us as we build the model, you can download the training and test sets here.

Formatting the Datasets

This dataset supports the task of summarizing and restructuring text into a very specific syntax. All data is within the video game domain, and all prompts take the form of either questions or statements about various games. The goal of the model is to take these prompts, extract the main idea behind them, and reformat them. For example, the prompt “Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good” becomes “give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor]).”

First, we’ll check that this dataset is valid per our guidelines for AutoML datasets. At only around 5,000 rows, its size is well under the limit of 50,000. To get the formatting right, we just need to put the prompt in a column titled “prompt” and the expected completion in another column titled “completion”; all other columns can be removed. From this dataset, we will use the column “target” as “prompt” and the column “meaning_representation” as “completion.” The final CSV is shown below:

The Viggo dataset ready to upload to our AutoML platform
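
Using the example from above, a row of the final CSV looks like this (with both fields quoted, since they contain commas):

    prompt,completion
    "Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good","give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor])"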

Uploading the Datasets

Now let’s upload our datasets. We’ll be using both the training and test datasets from the Viggo dataset as provided here. After both datasets have been validated, we’re ready to train the model.

Creating a Training

We’ll head back to our Models page and select “Create New Model”. This time, the project type should be “Language Generative – Text”. We will then choose our training and test datasets from a list of ones that we’ve already uploaded to the platform. Then we’ll start the training!

Evaluating the Model

For Large Language Models, the metrics page looks a little different than it does for our classification models.

The project page for the Viggo model after it has completed training

The loss measures how closely the model’s response matches the reference response from the test data: 0 represents a perfect prediction, and a higher loss signifies that the prediction is further from the actual response sequence. If the reference response has 10 tokens, the model predicts each of the 10 tokens given all of the preceding reference tokens, and we display the resulting loss value.
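
In other words, this is the standard average next-token cross-entropy; for a reference response of N tokens, a common formulation (shown here for intuition) is:

    \mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\left(t_i \mid \text{prompt}, t_1, \ldots, t_{i-1}\right)

where t_1, …, t_N are the tokens of the reference response and p_θ is the probability the model assigns to each token.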

You can also evaluate your model by interacting with it in what we call the playground. Here you can submit prompts directly to your model and view its responses, allowing model evaluation through experimentation. The playground is available for 15 days after model training is complete, with a limit of 500 requests. If either the time or request limit is reached, you can instead choose to deploy the model and continue to use the playground without limits, with each request charged to your organization’s billing account.

For our Viggo model, all metrics are looking pretty good. We entered a few prompts into the playground to further test it, and the results showed no issues.

An example query and response from the playground feature

Deploying the Model

The process to deploy a Large Language Model is the same as it is for our classification models. Just click “Create Deployment” and you’ll be ready to submit API requests in just a few short minutes.

To see a video of the entire training and deployment process for an LLM, head over to our YouTube channel.

Final Thoughts

We hope this in-depth walkthrough of how to build different types of machine learning models with our AutoML platform was helpful. Keep an eye out for more AutoML tutorials in the coming weeks, such as a detailed guide to Retrieval Augmented Generation (RAG), data stream management systems (DSMS), and other exciting features we support.

If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.

Dataset Sources

All datasets that are linked to as examples in this post are publicly available for a wide range of uses, including commercial use. The snacks dataset and viggo dataset are both licensed under a Creative Commons Attribution Share-Alike 4.0 (CC BY-SA 4.0) license. They can be found on Hugging Face here and here. The Twitter Sentiment Analysis dataset is licensed under the Apache License, Version 2.0. It is available on Hugging Face here. None of these datasets may be used except in compliance with their respective license agreements.


Announcing Our ISO 27001:2022 and SOC 2 Type 2 Certifications

Hive is proud to announce that our information security management system (ISMS) has achieved both ISO 27001:2022 and SOC 2 Type 2 certifications. These certifications demonstrate our dedication to maintaining high standards of data security and privacy for our customers, partners, and stakeholders.

ISO 27001:2022 is an internationally recognized standard for ISMS created by the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO). It provides a systematic approach to managing sensitive information, with a focus on ensuring its confidentiality, integrity, and availability to those authorized to access it. By obtaining this certification, Hive has demonstrated its commitment to implementing security controls and best practices, protecting valuable information assets against a wide range of threats.

In addition to ISO 27001:2022, Hive has also successfully completed the SOC 2 Type 2 audit. SOC 2, developed by the American Institute of CPAs (AICPA), evaluates an organization’s controls over security, availability, processing integrity, confidentiality, and privacy. The audit consists of an in-depth evaluation of the company’s security practices over time. Completing this assessment adds another gold-standard security credential that validates the effectiveness of our security practices.

Earning these certifications has long been a goal for us. As an ML company, we process massive amounts of data through our APIs daily. It is critical that this information is secure, and we’re committed to maintaining the highest level of security management standards possible in order to provide our customers with the assurance that their data is safe with us.

To see our ISO 27001:2022 and SOC 2 Type 2 certifications or to ask any questions about our data security practices, please contact sales@thehive.ai.


Introducing Moderation Dashboard: a streamlined interface for content moderation

Over the past few years, Hive’s cloud-based APIs for moderating image, video, text, and audio content have been adopted by hundreds of content platforms, from small communities to the world’s largest and most well-known platforms like Reddit.

However, not every platform has the resources or the desire to build its own software on top of Hive’s APIs to manage its internal moderation workflows. And since the need for software like this is shared by many platforms, it made sense to build a robust, accessible solution to fill the gap.

Today, we’re announcing the Moderation Dashboard, a no-code interface for your Trust & Safety team to design and execute custom-built moderation workflows on top of Hive’s best-in-class AI models.  For the first time, platforms can access a full-stack, turnkey content moderation solution that’s deployable in hours and accessible via an all-in-one flexible seat-based subscription model.

We’ve spent the last month beta testing the Moderation Dashboard and have received overwhelmingly positive feedback.  Here are a few highlights:

  • “Super simple integration”: customizable actions define how the Moderation Dashboard communicates with your platform
  • “Effortless enforcement”: automating moderation rules in the Moderation Dashboard UI requires zero internal development effort
  • “Streamlined human reviews”: granular policy enforcement settings for borderline content significantly reduced the need for human intervention
  • “Flexible” and “Scalable”: easy to add seat licenses as your content or team needs grow, with a stable monthly fee you can plan for

We’re excited by the Moderation Dashboard’s potential to bring industry-leading moderation to more platforms that need it, and look forward to continuing to improve it with updates and new features based on your feedback.

If you want to learn more, the post below highlights how our favorite features work.  You can also read additional technical documentation here.

Easily Connect Moderation Dashboard to Your Application

Moderation Dashboard connects seamlessly to your application’s APIs, allowing you to create custom enforcement actions that can be triggered on posts or users – either manually by a moderator or automatically if content matches your defined rules.

You can create actions within the Moderation Dashboard interface specifying callback URLs that tell the Dashboard API how to communicate with your platform.  When an action triggers, the Moderation Dashboard will ping your callback server with the required metadata so that you can successfully execute the action on the correct user or post within your platform.
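
To make this concrete, here is a minimal sketch of a callback server built with Flask. The payload fields shown (“action” and “post_id”) are hypothetical placeholders; the actual metadata is whatever you define when configuring your actions:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/moderation-callback", methods=["POST"])
    def moderation_callback():
        payload = request.get_json()
        # Hypothetical fields; your real schema comes from your action setup
        action = payload.get("action")    # e.g., "remove_post" or "ban_user"
        post_id = payload.get("post_id")  # identifier for the affected content or user
        # ... call your platform's internal APIs here to carry out the action ...
        return {"status": "ok"}, 200

    if __name__ == "__main__":
        app.run(port=8000)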

Implement Custom Content Moderation Rules

At Hive, we understand that platforms have different content policies and community guidelines. Moderation Dashboard enables you to set up custom rules according to your particular content policies in order to automatically take action on problematic content using Hive model results. 

Moderation Dashboard currently supports access to both our visual moderation model and our text moderation model – you can configure which of over 50 model classes to use for moderation and at what level directly through the dashboard interface. You can easily define sets of classification conditions and specify which of your actions – such as removing a post or banning a user – to take in response, all from within the Moderation Dashboard UI. 

Once configured, Moderation Dashboard can communicate directly with your platform to implement the moderation policy laid out in your rule set. The Dashboard API will automatically trigger the enforcement actions you’ve specified on any submitted content that violates these rules.

Another feature unique to Moderation Dashboard: we keep track of (anonymized) user identifiers to give you insight into high-risk users. You can design rules that account for a user’s post history to take automatic action on problematic users. For example, platforms can identify and ban users with a certain number of flagged posts in a set time period, or with a certain proportion of flagged posts relative to clean content – all according to rules you set in the interface.

Intuitive Adjustment of Model Classification Thresholds

Moderation Dashboard allows you to configure model classification thresholds directly within the interface. You can easily set confidence score cutoffs (for visual) and severity score cutoffs (for text) that tell Hive how to classify content according to your sensitivity around precision and recall.

Streamline Human Review

Hive’s API solutions were generally designed with an eye towards automated content moderation. Historically, this has required our customers to expend some internal development effort to build tools that also allow for human review. Moderation Dashboard closes this loop by allowing custom rules that route certain content to a Review Feed accessible by your human moderation team.

One workflow we expect to see frequently: automating moderation of content that our models classify as clearly harmful, while sending posts with less confident model results to human review. By limiting human review to borderline content and edge cases, platforms can significantly reduce the burden on moderators while also protecting them from viewing the worst content.

Setting Human Review Thresholds

To do this, Moderation Dashboard administrators can set custom score ranges that trigger human review for both visual and text moderation. Content scoring in these ranges will be automatically diverted to the Review Feed for human confirmation. This way, you can focus review from your moderation team on trickier cases, while leaving content that is clearly allowable and clearly harmful to your automated rules. Here’s an example rule that sends text content scored as “controversial” (severity scores of 1 or 2) to the review feed but auto-moderates the most severe cases.
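
Conceptually, that example rule behaves like the sketch below (assuming the 0 to 3 severity scale implied above; the function and return values are illustrative only, since real rules are configured in the dashboard UI, not in code):

    def route_text_post(severity_score: int) -> str:
        # Illustrative only: mirrors the example rule described above
        if severity_score == 0:
            return "allow"          # clearly allowable: publish automatically
        elif severity_score in (1, 2):
            return "review_feed"    # controversial: route to human review
        else:
            return "auto_moderate"  # most severe: apply the configured action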

Review Feed Interface for Human Moderators

When your human review rules trigger, Moderation Dashboard will route the post to the Review Feed of one of your moderators, where they can quickly visualize the post and see Hive model predictions to inform a final decision.

For each post, your moderators can select from the moderation actions you’ve set up to implement your content policy. Moderation Dashboard will then ping your callback server with the required information to execute that action, enabling your moderators to take quick action directly within the interface.

Additionally, Moderation Dashboard makes it simple for your Trust & Safety team administrators to onboard and grant review access to additional moderators. Platforms can easily scale their content moderation capabilities to keep up with growth.

Access Clear Intel on Your Content and Users

Beyond individual posts, Moderation Dashboard includes a User Feed that allows your moderators to see detailed post histories of each user that has submitted unsafe content. 

Here, your moderators can access an overview of each user including their total number of posts and the proportion of those posts that triggered your moderation rules. The User Feed also shows each of that user’s posts along with corresponding moderation categories and any corresponding action taken. 

Similarly, Moderation Dashboard makes quality control easy with a Content Feed that displays all posts moderated automatically or through human review. The Content Feed allows you to see your moderation rules in action, including detailed metrics on how Hive models classified each post. From here, administrators can supervise human moderation teams for simple QA or further refine thresholds for automated moderation rules.

Effortless Moderation of Spam and Promotions

In addition to model classifications, Moderation Dashboard will also filter incoming text for spam entities – including URLs and personal information such as emails and phone numbers. The Spam Manager interface will aggregate all posts containing the same spam text into a single action item that can be allowed or denied with one click.

With Spam Manager, administrators can also define custom whitelists and blacklists for specific domains and URLs and then set up rules to automatically moderate spam entities in these lists. Finally, Spam Manager provides detailed histories of users that post spam entities for quick identification of bots and promotional accounts, making it easy to keep your platform free of junk content. 

Final Thoughts: The Future of Content Moderation

We’re optimistic that Moderation Dashboard can help platforms of all sizes meet their obligations to keep online environments safe and inclusive. With Moderation Dashboard as a supplement to (or replacement for) internal moderation infrastructure, it’s never been easier for our customers to leverage our top-performing AI models to automate their content policies and increase efficiency of human review. 

Moderation Dashboard is an exciting shift in how we deliver our AI solutions, and this is just the beginning. We’ll be quickly adding additional features and functionality based on customer feedback, so please stay tuned for future announcements.

If you’d like to learn more about Moderation Dashboard or schedule a personal demo, please feel free to contact sales@thehive.ai.


Hive Completes SOC 2 Type 1 Audit

At Hive, we understand that our customers continually put their trust in us to provide the best quality of service possible. We take this trust seriously, and we work hard to provide the highest level of security we can. As a first step in showing our commitment to security, we’re proud to announce that Hive has successfully completed a SOC 2 Type 1 audit with the Trust Service Criteria of Security, Availability, and Confidentiality. SOC 2 is the most widely accepted information security audit in North America, and we believe that passing this audit reinforces our commitment to maintaining best-in-class internal controls for safeguarding our customers’ data.

We recognize that data security is a critical concern for our customers. This is why we have ingrained security into all of our engineering processes at Hive. With a host of preventative, detective, and restorative measures, we believe we have enabled 360 degrees of security around our infrastructure and critical customer data.

What is a SOC 2 Audit?

The SOC 2 Audit is designed for organizations that provide services to other entities while interacting with their data. It provides a consistent set of criteria by which to measure the security, confidentiality, availability, processing integrity, and/or privacy practices of an organization. An independent third-party CPA firm must conduct the audit, after which it issues an audit report with the findings. There are two types of SOC 2 audits:

  • Type 1 – Report on management’s description of a service organization’s system and the suitability of the design of controls.
  • Type 2 – Report on management’s description of a service organization’s system and the suitability of the design and operating effectiveness of controls.

The SOC 2 audit is conducted on an annual basis to measure continued success in the defined criteria.

What’s Next?

We believe that continuous innovation is key to providing the best service possible. As Hive grows in size and complexity, we recognize that it is critical for our security practices to grow as well. On top of continuously monitoring and adapting our security practices, we will move forward with a SOC 2 Type 2 audit in 2022, and we have ISO 27001 on the compliance roadmap.

For more information on our security practices and plans, please contact our security team at security@thehive.ai.


Why We Worked with Parler to Implement Effective Content Moderation

Earlier today, The Washington Post published a feature detailing Hive’s work with social network Parler, and the role our content moderation solutions have played in protecting their community from harmful content and, as a result, earning their app reinstatement in Apple’s App Store.

We are proud of this very public endorsement on the quality of our content moderation solutions, but also know that with such a high-profile client use case there may be questions beyond what could be addressed in the article itself about why we decided to work with Parler and what role we play in their solution. For detailed answers to those questions, please see below.

Why did Hive decide to work with Parler?

We believe that every company should have access to best-in-class content moderation capabilities to create a safe environment for their users. While other vendors terminated their relationships with Parler earlier this year in the belief that their services were enabling a toxic environment, we believe our work addresses the core challenge Parler faced and enables a safe community in which Parler’s users can engage.

As outlined in our recent Series D funding announcement, our founders’ precursor to Hive was a consumer app business that itself confronted the challenge of moderating content at scale as the platform quickly grew. The lack of available enterprise-grade, pre-trained AI models to support this content moderation use case (and others) eventually inspired an ambitious repositioning of the company around building a portfolio of cloud-based enterprise AI solutions.

Our founders were not alone. Content moderation has since emerged as a key area of growth in Hive’s business, now powering automated content moderation solutions for more than 75 platforms globally, including prominent dating services, video chat applications, verification services, and more. A December 2020 WIRED article detailed the impact of our work with iconic random chat platform Chatroulette.

When Parler approached us for help in implementing a content moderation solution for their community, we did not take the decision lightly. However, after discussion, we aligned on having built this product to provide democratized access to best-in-class content moderation technology. From our founders’ personal experience, we know it is not feasible for most companies to build effective moderation solutions internally, and we therefore believe we have a responsibility to help any and all companies keep their communities safe from harmful content.

What is Hive’s role in content moderation relative to Parler (or Hive’s other moderation clients)?

Hive provides automated content moderation across video, image, text, and audio, spanning more than 40 classes (i.e., granular definitions of potentially harmful content classifications such as male nudity, gun in hand, or illegal injectables).

Our standard API provides a confidence score for every content submission against all of our 40+ model classes. In the case of Parler, model-flagged instances of hate speech or incitement in text are additionally reviewed by members of Hive’s 2.5-million-plus distributed workforce (additional details below).

Our clients map our responses to their individual content policies – in terms of what categories they look to identify, how sensitive content is treated (i.e., blocked or filtered), and the tradeoff between recall (i.e., the percentage of total instances identified by our model) and precision (i.e., the corresponding percentage of identifications where our model is accurate). Hive partners with clients during onboarding as well as on an ongoing basis to provide guidance on setting class-specific thresholds based on client objectives and the desired tradeoffs between recall and precision.

It is the responsibility of companies like Apple to then determine whether the way our clients choose to implement our technology is sufficient to be distributed in their app stores, which in the case of Parler, Apple now has.

What percentage of content is moderated, and how fast?

100% of posts on Parler are processed through Hive’s models at the point of upload, with latency of automated responses in under 1 second.

Parler uses Hive’s visual moderation model to identify nudity, violence, and gore. Any harmful content identified is immediately placed behind a sensitive content filter at the point of upload (notifying users of sensitive content before they view).

Parler also uses Hive’s text moderation model to identify hate speech and incitement. Any potentially harmful content is routed for manual review. Posts deemed safe by Hive’s models are immediately posted to the site, whereas flagged posts are not displayed until model results are validated by a consensus of human workers. It typically takes 1-3 minutes for a flagged post to be validated. Posts containing incitement are blocked from appearing on the platform; posts containing hate speech are placed behind a sensitive content filter. Human review is completed using thousands of workers within Hive’s distributed workforce of more than 2.5 million registered contributors who have opted into and are specifically trained on and qualified to complete the Parler jobs.

In addition to the automated workflow, any user-reported content is automatically routed to Hive’s distributed workforce for additional review and Parler independently maintains a separate jury of internal moderators that handle appeals and other reviews.

This process is illustrated in the graphic below.

How effective is Hive’s moderation of content for Parler, and how does that compare to moderation solutions in place on other social networks?

We have run ongoing tests since launch to evaluate the effectiveness of our models specific to Parler’s content. While we believe that these benchmarks demonstrate best-in-class moderation, there will always be some level of false negatives. However, the models continue to learn from their mistakes, which will further improve the accuracy over time.

Within visual moderation, our tests suggest the incidence rate of adult nudity and sexual activity content not placed behind a sensitive content filter is less than 1 in 10,000 posts. In Facebook’s Q4 2020 Transparency Report (which, separately, we think is a great step forward for the industry and something all platforms should publish), it was reported that the prevalence of adult nudity and sexual activity content on Facebook was ~3 to 4 views per 10,000 views. These numbers can be seen as generally comparable under the assumption that posts with sensitive content average roughly the same number of views as all other posts.

Within text moderation, our tests suggest the incidence rate of hate speech (defined as text hateful towards another person or group based on protected attributes, such as religion, nationality, race, sexual orientation, gender, etc.) not placed behind a sensitive content filter was roughly 2 of 10,000 posts. In Q4 2020, Facebook reported the prevalence of hate speech was 7 to 8 views per 10,000 views on their platform.

Our incidence rate of incitement (defined as text that incites or promotes acts of violence) not removed from the platform was roughly 1 in 10,000 posts. This category is not reported by Facebook for the purposes of benchmarking.

Does Hive’s solution prevent the spread of misinformation?

Hive’s scope of work with Parler does not currently include the identification of misinformation or manipulated media (i.e., deepfakes).

We hope the details above are helpful in further increasing understanding of how we work with social networking sites such as Parler and the role we play in keeping their environment (and others) safe from harmful content.

Learn more at https://thehive.ai/ and follow us on LinkedIn.

Press with additional questions? Please contact press@thehive.ai to request an interview or additional statements.

Note: All data specific to Parler above was shared with explicit permission from Parler.