To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive’s commitment to providing customers with API-accessible solutions for challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti from the University of Chicago, which sought to determine who was more effective at classifying images as AI-generated: humans or automated detectors.
Ha and Passananti’s study addresses a growing problem within the generative AI space: as generative AI models become more advanced, human-created art and AI-generated images have become increasingly difficult to tell apart. With such powerful tools accessible to the general public, various legal and ethical concerns have been raised regarding their misuse.
Such concerns are pertinent to address because the misuse of generative AI models negatively impacts both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes, such as spreading misinformation, committing fraud, or scamming individuals and organizations. As only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown evidence that AI models will deteriorate if their training data solely consists of AI-generated content—which is where Hive’s classifier comes in handy.
The study’s results show that Hive’s model outperforms both its automated peers and highly-trained human experts in differentiating between human-created art versus AI-generated images across most scenarios. This post examines the study’s methodologies and findings, in addition to highlighting our model’s consistent performance across various inputs.
Structuring the Study
In the experiment, researchers evaluated the performance of five automated detectors (three of which are commercially available, including Hive’s model) and humans against a dataset containing both human-created and AI-generated images across various art styles. Humans were categorized into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience in identifying AI-generated images.
The dataset consists of four different image groups: human-created art, AI-generated images, “hybrid images” that combine generative AI and human effort, and perturbed versions of human-created art. A perturbation is a minor change to the model input designed to expose vulnerabilities in the model. Four perturbation methods are used in the study: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which perturbs images at the pixel level), and Glaze (a tool that protects human artists from mimicry by introducing imperceptible perturbations to their artwork).
After evaluating the model on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.
Evaluation Methods and Findings
The researchers evaluated the automated detectors on four metrics: overall accuracy (the ratio of correctly classified images to the entire dataset), false positive rate (the ratio of human-created art misclassified as AI-generated), false negative rate (the ratio of AI-generated images misclassified as human-created), and AI detection success rate (the ratio of AI-generated images correctly classified as AI-generated to the total number of AI-generated images).
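For reference, here is a minimal Python sketch of how these four metrics can be computed from ground-truth and predicted labels (the label names are illustrative, not the study's):

```python
def detector_metrics(y_true, y_pred):
    """Compute the four evaluation metrics from lists of "ai"/"human" labels."""
    total = len(y_true)
    ai_total = sum(1 for t in y_true if t == "ai")
    human_total = total - ai_total

    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == "human" and p == "ai")
    false_neg = sum(1 for t, p in zip(y_true, y_pred) if t == "ai" and p == "human")

    return {
        "overall_accuracy": correct / total,
        "false_positive_rate": false_pos / human_total,   # human art flagged as AI
        "false_negative_rate": false_neg / ai_total,      # AI images missed
        "ai_detection_success_rate": (ai_total - false_neg) / ai_total,
    }
```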
Among automated detectors, Hive’s model emerged as the “clear winner” (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, but it also has a 0% false positive rate (i.e., it never misclassifies human art) and a low 3.17% false negative rate (i.e., it rarely misclassifies AI-generated images). According to the authors, this could be attributed to Hive’s rich collection of generative AI datasets, with high quantities of diverse training data compared to its competitors.
Additionally, Hive’s model proved resistant to most perturbation methods, but faced some challenges classifying AI-generated images processed with Glaze. However, it should be noted that Glaze’s primary purpose is as a protection tool for human artwork. Glazing AI-generated images is a non-traditional use case, with minimal training data available as a result. Thus, Hive’s model’s performance on Glazed AI-generated images has little bearing on its overall quality.
Final Thoughts Moving Forward
When it comes to automated detectors and humans alike, Hive’s model is unparalleled. Even compared to human expert artists, Hive’s model classifies images with higher levels of confidence and accuracy.
While the study considers the model’s potential areas for improvement, it is important to note that the study was published in February 2024. In the months following the study’s publication, Hive’s model has vastly improved and continues to expand its capabilities, with 12+ model architectures added since.
If you’d like to learn more about Hive’s AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. However, don’t just trust us, test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.
Hive's Innovative Integration with Thorn's Safer Match
We are excited to announce that Hive’s Partnership with Thorn is now live! Our current and prospective customers can now easily integrate Thorn’s Safer Match, a CSAM (child sexual abuse material) detection solution, using Hive’s APIs.
The Danger of CSAM
The threat of CSAM involves the production, distribution, and possession of explicit images and videos depicting minors. Every platform with an upload button or messaging capabilities is at risk of hosting CSAM. In fact, in 2023 alone, more than 104 million reports of potential CSAM were made to the National Center for Missing & Exploited Children.
The current state-of-the-art approach is to use a hashing function to “hash” the content and then “match” it against a database aggregating 57+ million verified CSAM hashes. If the content hash matches against the database, the content can be flagged as CSAM.
How the Integration Works
When presented with visual content, we first hash it, then match it against known instances of CSAM, as sketched after the steps below.
Hashing: We take the submitted image or video, and convert it into one or more hashes.
Deletion: We then immediately delete the submitted content, ensuring nothing stays on Hive’s servers.
Matching: We match the hashes against the CSAM database and return the match results to you.
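To make the flow concrete, here is a minimal, purely illustrative sketch of the exact-match (cryptographic hash) path; the perceptual and scene-sensitive hashing that Safer Match also performs is proprietary and not represented here:

```python
import hashlib

# Illustrative stand-in for Thorn's database of 57+ million verified CSAM hashes.
KNOWN_CSAM_HASHES = {"0123456789abcdef0123456789abcdef"}  # hypothetical digests

def hash_and_match(image_bytes: bytes) -> bool:
    # Hashing: convert the submitted content into a hash.
    digest = hashlib.md5(image_bytes).hexdigest()
    # Deletion: the raw content is discarded immediately; only the hash is kept.
    del image_bytes
    # Matching: compare the hash against the database of known CSAM hashes.
    return digest in KNOWN_CSAM_HASHES
```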
Hive’s partnership with Thorn allows our customers to easily incorporate Thorn’s Safer Match into their detection toolset. Safer Match provides programmatic identification of known CSAM with cryptographic and perceptual hash matching for images and for videos, through proprietary scene-sensitive video hashing (SSVH).
How you can use this API today:
First, talk to your Hive sales rep, and get an API key and credentials for your new endpoint.
Image
For an image, simply send the image to us, and we will hash it using MD5 and Safer’s hashing algorithms. Once the image is hashed, we return the results in our output JSON.
Video
You can also send videos to the API. We use MD5 hashes and Safer’s proprietary perceptual hashing for videos as well, but they serve different purposes: MD5 returns exact matches and only indicates whether the whole video is a known CSAM video.
Additionally, Safer hashes different scenes within the video and flags those known to be violating. Flagged scenes are demarcated by start and end timestamps, as shown in the response below.
Note: For the Safer SSVH, videos are sampled at 1FPS.
For more information, you can reference our documentation.
Teaming Up For a Safer Internet
CSAM is one of the most pervasive and harmful issues on the internet today. Legal requirements make this problem even harder to tackle, and previous technical solutions required significant integration efforts. But, together with Thorn’s proactive technology, we can respond to this challenge and help make the internet a safer place for everyone.
Hive’s AutoML platform allows anyone the opportunity to create best-in-class machine learning solutions for the particular issues they face. Our platform can create classification and large language models for an endless range of use cases. If you need a model that bears no resemblance whatsoever to any pre-trained model we offer, no problem! We’ll help you build one yourself.
Hive AutoML uses the same technology behind our industry-leading ML tools to create yours. This way you get the best of both worlds — Hive’s impeccable model performance and a tool custom-built to address your needs.
Hive AutoML for Content Moderation
Today we’ll be focusing on one particular application of our AutoML platform: customizing our moderation models. These models kickstarted our success as a company and are used by many of the largest online platforms in the world. But the moderation guidelines of many sites differ from each other, and sometimes our base moderation models don’t quite fit them.
With AutoML, you can create your own version of our moderation models by fine-tuning our pre-existing heads or adding new heads entirely. We will then train a version of our high-performing base model with your added data to create a tool that best suits your platform’s moderation process.
In this blog post, we’ll walk through both how to add more data to an existing Hive moderation head and how to add a new custom moderation head. We’ll demonstrate the former while building a visual moderation model and the latter on a text moderation model. Audio moderation is not currently supported on AutoML.
Building a Visual Moderation Model
Hive AutoML for Visual Moderation allows you to customize our Visual Moderation base model to fit your specific needs. Using your own data, you can add new model heads or fine-tune any of the existing 45+ subclasses that we provide as part of our Visual Moderation tool. A full list of these classes is available here.
For this walkthrough, we’ll be fine-tuning the tobacco head. Our data will thus include images and labels for this head only. The resulting model will include all Hive visual moderation heads, with the tobacco head re-trained to incorporate this new data.
Uploading Your Dataset
Before you start building your model, you first need to upload any datasets you’ll use to the Dataset section of our AutoML platform. For Visual Moderation model training, we require a CSV file with a column for your image data (as publicly accessible image URLs) and an additional column for each head you wish to train.
For this tutorial, we’re going to train using additional data for the tobacco class. The below CSV includes image URLs and a column of labels for that head.
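As an illustration, the first few rows of such a dataset might look like the following (the URLs and label names here are placeholders; check the class list linked above for the exact labels your heads expect):

```
image_url,tobacco
https://example.com/images/img_0001.jpg,yes_tobacco
https://example.com/images/img_0002.jpg,no_tobacco
https://example.com/images/img_0003.jpg,yes_tobacco
```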
After you’ve selected your dataset file, you’ll be asked to confirm the column mapping. Make sure the columns of your dataset have been interpreted correctly and that you have the correct format (image or text) selected for each column.
Once you’ve confirmed your mapping, you can preview and edit your data. This page opens automatically after any dataset upload. You will be able to check whether all images were uploaded successfully, view the images themselves, and change their respective labels if desired. You can also add or delete any data that you wish to before you proceed onto model training.
Creating a Dataset Snapshot
When you’re happy with your dataset, you’ll then need to create a snapshot from it. A snapshot is a point-in-time export of a dataset that validates that dataset for training. Once a snapshot is created, its contents cannot be changed. This means that while you can continue to edit your original dataset, your snapshot will not change along with it — if you make any changes, you’ll need to create a new snapshot after you’re finished with your changes.
You can create a snapshot from any live dataset. To do so, simply click the “Create Snapshot” button on that dataset’s detail page. You’ll be prompted to provide some information, most notably which columns to use for image input and data labels. After your snapshot is successfully created, you’re ready to start training!
Creating a New Model
To create a training, you can select the “Create Model” button on the snapshot detail page. You’ll once again be asked to provide several pieces of information, including your model’s name, description, base model, and datasets. Make sure to select “Hive Vision Moderation” under the “Base Model” category as opposed to a general image classification model.
You can choose to upload a separate test dataset or split off a random section of your training dataset to use instead. If you choose to upload a separate test dataset, this dataset must contain the same heads and classes as your training dataset. After uploading your dataset, you will also need to create a snapshot of that dataset before you begin model training.
If you choose to split off a section of your training dataset, you will be able to choose the percentage of that dataset that you would like to use for testing as you create your training.
Before you begin your training, you are also able to edit some training preferences such as maximum number of training epochs, model selection rule, model selection label, early stopping, and invalid data criteria. If you’re unsure what any of these options are, there is a little information icon next to each that will explain what is meant by that setting.
After uploading your training (and, if desired, test) dataset and selecting your desired training options, you’re ready to create your model. After you begin training, your model will be ready within 20 minutes. You will automatically be directed to the model’s detail page, where you can watch its progress as it trains.
Playground and Metrics: Evaluating Your Model
When your model has completed its training, the model’s detail page will display a variety of metrics in order to help you analyze your model’s performance. At the top of the page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You can toggle whether these metrics are calculated by head overall or by each class within a head.
Below these numbers, you’ll also be able to view an interactive precision/recall (PR) curve. This is the gold-standard metric for a classification model and gives you more insight into how your model balances the inherent tradeoff between high precision and high recall.
You’ll then be shown a confusion matrix, which is an exact breakdown of the true positives, false positives, true negatives, and false negatives of the model’s results. This can highlight particular weak spots of your model and potential areas you may want to address with further training. As shown below, our example model has no false positives but several false negatives — images with tobacco that were classified as “no_tobacco.”
The final section of our metrics page is an area called the “playground.” The playground allows you to test your newly created AutoML model by submitting sample queries and viewing the responses. This feature is another great way to explore the way that your model responds to different kinds of prompts and the areas in which it could improve. You are given 500 free sample queries — beyond that you will be prompted to deploy your model with the cost of each submission charged to your organization’s billing account.
To test our tobacco model, we submitted the following sample image. To the right of it you can see the results for each Hive visual moderation class, including tobacco, where it is classified correctly with a perfect confidence score of 1.00.
Deploying Your Model
To begin using your model, you can create a deployment from it. This will open the project on Hive Data, where you will be able to upload tasks, view tasks, and access your API key as you would with any other Hive Data project. An AutoML project can have multiple active deployments at one time.
Building a Text Moderation Model
Just like for Visual Moderation, our AutoML platform allows you to customize our Text Moderation base model to fit your particular use cases by adding or re-training model categories. The full class definitions for all 13 of our currently offered heads are available here. For this section of the walkthrough, we will be creating a new custom head in order to add capabilities to our model that we don’t currently offer: sentiment analysis.
Sentiment analysis is the task of categorizing the emotional tone of a piece of text, typically into two labels: positive or negative. Occasionally a sentiment analysis task breaks the sentiment down into more specific categories, such as joyful, angry, etc. Adding this kind of information to our existing Hive Text Moderation model could prove useful for platforms that wish to exclude negative content on sites for children or to limit certain comment sections or forums where negative commentary is unwanted.
Sentiment analysis is a complex problem, since it is a language-based task. Understanding the meaning and tone of a sentence is not always easy even for humans. To keep it simple, we’ll just be using the two possible classifications of positive and negative.
Uploading Your Dataset
Similarly to creating a Visual Moderation model, you’ll need to upload your data as a CSV file to the “Data” section of our AutoML platform prior to model training. The format of our sentiment analysis dataset is shown below, though the column names do not need to be anything specific in order to be processed correctly.
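A hypothetical excerpt of such a dataset might look like this (the rows are made up, and the column names match the ones we select during snapshot creation below):

```
text_data,sentiment
"Had such a great time at the concert last night!",positive
"This update completely broke my app, so frustrating.",negative
"The new café downtown is lovely.",positive
```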
After uploading your dataset, you’ll be asked to confirm the format of each column as either text, images, or JSONs. If you’d like to disregard a column entirely, you can also select “Ignore Column.” After you hit confirm, you can preview and edit your dataset just as you could with your image dataset in the Visual Moderation example. The preview page for text datasets is shown below.
Creating a Dataset Snapshot
As described in the Visual Moderation walkthrough, you’ll need to create a snapshot of your dataset in order to validate it prior to model training. When making your snapshot, make sure that you select “Text Classification” as your “Snapshot Type.” This will ensure that your snapshot is sufficient to train a Text Moderation model. You will also need to specify which column contains your text input and which contains the labels for that text input, as shown below for our dataset.
In the example above, we’ve selected our “text_data” column as our input and our “sentiment” column as our training labels.
Creating a New Model
After you’ve created your snapshot, you’ll automatically be brought to that snapshot’s detail page. From this page, starting a new model training is easy — just hit the big “Create New Model” button on the top right. You’ll be asked to name your model and provide a few key details about the training, such as which snapshots you’d like to use as your data and how many times a training will cycle through that data.
Make sure you’ve selected “Text Classification” as your model type and “Hive Text Moderation” as your base model. Then you’re ready to start your training! Model training takes up to 20 minutes depending on several factors including the size of your dataset. Most take only several minutes to complete.
Metrics and Model Evaluation
Once your training has completed, you’ll be redirected to the details page for your new moderation model. On this page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You will also be able to view a precision/recall (P/R) curve and confusion matrix in order to further analyze the performance of your model.
The overall performance of the model is pretty good for a difficult task such as sentiment analysis. While there is room for improvement, this first round of training indicates that with some additional data we could likely bring all metrics above 90%. The confusion matrix shows that false negatives are a particular weak spot; a possible fix would be to increase the number of positive examples in the data and see whether results improve.
We do not currently offer the playground feature for text moderation models, though we are working on this and expect it to be released in the coming months.
Deploying Your Model
The process for deploying your model is identical to the way we deployed our Visual Moderation model in the first example. To deploy any model, simply click “Create Deployment” from that model’s details page. Once deployed, you can access your unique API keys and begin to submit tasks to the model like any other Hive model.
Final Thoughts
We hope this in-depth walkthrough was helpful. If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.
Hive’s AutoML platform allows you to quickly train, evaluate, and deploy machine learning models for your own custom use cases. The process is simple — just select your desired model type, upload your datasets, and you’re ready to begin training!
Since we announced the initial release of our AutoML platform, we’ve added support for Large Language Model training. Now you can build everything from classification models to chatbots, all in the same intuitive platform. To illustrate how easy the model-building process is, we’ll walk through it step-by-step with each type of model. We’ll also provide a link to the publicly available dataset we used as an example so that you can follow along.
Training an Image Classification Model
First we’re going to create an Image Classification model. This type of model is used to identify certain subjects, settings, and other visual attributes in both images and videos. For this example, we’ll be using a snacks dataset to identify 20 different kinds of food (strawberries, apples, hot dogs, cupcakes, etc.). To follow along with this walkthrough, first download the images from this dataset, which are sorted into separate files for each label.
Formatting the Datasets
After downloading the image data, we’ll need to put this data in the correct format for our AutoML training. For Image Classification datasets, the platform requires a CSV file that contains one column for image URLs titled “image_url” and up to 20 other columns for the classification categories you wish to use. This requires creating publicly accessible links for each image in the dataset. For this example, all 20 of our food categories will be part of the same head — food type. To do this, we formatted our CSV as follows:
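For example, the first few rows of the formatted CSV could look like the following (the head column name and URLs are illustrative placeholders for the publicly accessible links you create):

```
image_url,food_type
https://example.com/snacks/apple_0001.jpg,apple
https://example.com/snacks/hot_dog_0145.jpg,hot dog
https://example.com/snacks/cupcake_0072.jpg,cupcake
```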
This particular dataset is within the size limitations for Image Classification datasets. When uploading your own dataset, it is crucial that you ensure it meets all of the sizing requirements and other specifications or the dataset upload will fail. These requirements can be found in our AutoML documentation.
Both test and validation datasets are provided as part of the snacks dataset. When using your own datasets, you can choose to upload a test dataset or to split off a random section of your training data to use instead. If you choose the latter, you will be able to select what percentage of that data you want to use as test data as you create your training.
Uploading the Datasets
Before we start building the model, we first need to upload both our training and test datasets to the “Datasets” section of our AutoML platform. This part of our platform validates each dataset before it can be used for training as well as stores all datasets to be easily accessed for future models. We’ll upload both the training and test datasets separately, naming them Snacks (Train) and Snacks (Test) respectively.
Creating a Training
To start building the model, we’ll head to our AutoML platform and select the “Create New Model” button. We’ll then be brought to a project setup page where we’ll be prompted to enter a project name and description. For Model Type, we’ll select “Image Classification.” On the right side of the screen, we can add our training dataset by selecting from our dataset library. We’ll select the datasets called Snacks (Train) and Snacks (Test) that we just uploaded.
And just like that, we’re ready to start training our model! To begin the training process, we’ll click the “Start Training Model” button. The model’s status will then shift to “Queued” and then “In Progress” while we train the model. This will likely take several minutes. When training is complete, the status will display as “Completed.”
Evaluating the Model
After model training is complete, the page for that project will show various performance metrics so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.
Below that, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that two images of cupcakes were incorrectly classified as cookies — an understandable mistake as the two are both decorated desserts.
These detailed metrics can help us to know which categories to target if we want to train a better version of the model. If you would like to retrain your model, you can also click the “Update Model” button to begin the training process again.
Deploying the Model
Even after the first time training this model, we’re pretty happy with how it turned out. We’re ready to deploy the model and start using it. To deploy, select the project and click the “Create Deployment” button in the top right corner. The project’s status will shift to “Deploying.” The deployment may take a few minutes.
Submitting Tasks via API
After the deployment is complete, we’re ready to start submitting tasks via API as we would any pre-trained Hive model. We can click on the name of any individual deployment to open the project on Hive Data, where we can upload tasks, view tasks, and access our API key. There is also a button to “Undeploy” the project, if we wish to deactivate it at any point. Undeploying a model is not permanent — we can redeploy the project if we later choose to.
To see a video of the entire training and deployment process for an Image Classification model, head over to our YouTube channel.
Training a Text Classification Model
We’ll now walk through that same training process in order to build a Text Classification model, but with a few small differences. Text classification models can be used to sort and tag text content by topic, tone, and more. For this example, we’ll use the Twitter Sentiment Analysis dataset posted by user carblacac on Hugging Face. This dataset consists of a series of short text posts originally published to Twitter and whether they have a negative (0) or positive (1) overall sentiment. To follow along with this walkthrough, you can download the dataset here.
Formatting the Datasets
For Text Classification datasets, our AutoML platform requires a CSV with the text data in a column titled “text_data” and up to 20 other columns that each represent classification categories, also called model heads. Using the Twitter Sentiment Analysis dataset, we only need to rename the columns like so:
The data consists of two sets: a training set with 150k examples and a test set with 62k examples. Before we upload our dataset, however, we must ensure that it fits our Text Classification dataset requirements. The training set does not: our AutoML platform only accepts CSV files with 100,000 rows or fewer, and this one has 150,000. To use this dataset, we’ll have to remove some examples. To keep the number of examples for each class relatively equal, we removed 25,000 negative (0) examples and 25,000 positive (1) ones.
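If you're following along, a short pandas script along these lines can handle both the column renaming and the trimming (the file names are placeholders; this assumes the original columns are named "text" and "feeling" as in the Hugging Face version, so adjust if your download differs):

```python
import pandas as pd

# Load the training split downloaded from Hugging Face.
df = pd.read_csv("twitter_sentiment_train.csv")

# Rename the columns to the names expected by the AutoML platform
# (adjust the originals if yours differ).
df = df.rename(columns={"text": "text_data", "feeling": "sentiment"})

# Drop 25,000 examples from each class so the file fits the 100,000-row limit
# while keeping the positive/negative split roughly even.
trimmed = pd.concat(group.iloc[25_000:] for _, group in df.groupby("sentiment"))
trimmed.to_csv("twitter_sentiment_train_100k.csv", index=False)
```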
Uploading the Datasets
After fixing the size issue, we’re ready to upload our datasets. As is the case with all model types, we must first upload any datasets we are going to use before we create our training.
Creating a Training
After both the training and test datasets have been validated, we’re ready to start building our model. On our AutoML platform, we’ll click the “Create New Model” button and enter a project name and description. For our model type, this time we’ll select “Text Classification.” Finally, we’ll add the training and test datasets that we just uploaded.
We’re then ready to start training! This aspect of the training process is identical to the one shown above for an Image Classification model. Just click the “Start Training Model” button on the bottom right corner of the screen. When training is complete, the status will display as “Completed.”
Evaluating the Model
Just like in our Image Classification example, the project page will show various performance metrics after training is complete so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.
Below the precision, recall, and balanced accuracy, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that while there were a fair number of mistakes for each class, positive examples were mistaken for negative ones more often than the other way around.
While the results of this training are not as good as our Image Classification example, this is somewhat expected — sentiment analysis is a more complex and difficult classification task. While this model could definitely be improved by retraining with slightly different data, we’ll demonstrate how to deploy it. To retrain your model, however, all you need to do is click the “Update Model” button and begin the training process again.
Deploying the Model
Deploying your model is the exact same process as described above in the Image Classification example. After the deployment is complete, you’ll be able to view the deployment on Hive Data and access the API keys needed in order to begin using the model.
To see a video of the entire training and deployment process for a Text Classification model, head over to our YouTube channel.
Training a Large Language Model
Finally, we’ll walk through the training process for a Large Language Model (LLM). This process is slightly different from the training process for our classification model types, both in terms of dataset formatting and model evaluation. Our AutoML platform supports two different types of LLMs: Text and Chat. Text models are geared towards generating passages of writing or lines of code, whereas chat models are built for interactions with the user, often in the format of asking questions and receiving concise, factual answers. For this example, we’ll be using the Viggo dataset uploaded by GEM to Hugging Face. To follow along with us as we build the model, you can download the training and test sets here.
Formatting the Datasets
This dataset supports the task of summarizing and restructuring text into a very specific syntax. All data is within the video game domain, and all prompts take the form of either questions or statements about various games. The goal of the model is to take these prompts, extract the main idea behind them, and reformat them. For example, the prompt “Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good” becomes “give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor]).”
First, we’ll check to make sure this dataset is valid per our guidelines for AutoML datasets. At only around 5,000 rows, it is well under the limit of 50,000. To get the formatting right, we just need the prompt in a column titled “prompt” and the expected completion in another column titled “completion”; all other columns can be removed. From this dataset, we will use the column “target” as “prompt” and the column “meaning_representation” as “completion.” The final CSV is as shown below:
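Using the example prompt from above, a row of the resulting CSV would look something like this:

```
prompt,completion
"Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it's just not good","give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor])"
```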
Uploading the Datasets
Now let’s upload our datasets. We’ll be using both the training and test datasets from the Viggo dataset as provided here. After both datasets have been validated, we’re ready to train the model.
Creating a Training
We’ll head back to our Models page and select “Create New Model”. This time, the project type should be “Language Generative – Text”. We will then choose our training and test datasets from a list of ones that we’ve already uploaded to the platform. Then we’ll start the training!
Evaluating the Model
For Large Language Models, the metrics page looks a little different than it does for our classification models.
The loss measures how closely the model’s response matches the response from the test data, where 0 represents a perfect prediction and a higher loss signifies that the prediction is increasingly far from the actual response sequence. If the response has 10 tokens, we let the model predict each of the 10 tokens given that all previous tokens match the reference response, and display the final numerical loss value.
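Conceptually, this is the standard teacher-forced, per-token negative log-likelihood. The sketch below is our own illustration of that idea, not the platform's exact implementation:

```python
import numpy as np

def teacher_forced_loss(token_probs):
    """Average negative log-likelihood over the reference tokens.

    token_probs: the probability the model assigned to each reference token,
    given that all preceding tokens match the reference (teacher forcing).
    Probability 1.0 for every token gives a loss of 0.
    """
    return float(-np.mean(np.log(token_probs)))

# Hypothetical per-token probabilities for a 10-token reference completion.
print(teacher_forced_loss([0.9, 0.8, 0.95, 0.7, 0.99, 0.85, 0.9, 0.6, 0.97, 0.88]))
```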
You can also evaluate your model by interacting with it in what we call the playground. Here you can submit prompts directly to your model and view its response, allowing model evaluation through experimentation. This will be available for 15 days after model training is complete, and has a limit of 500 requests. If either the time or request limit is reached, you can instead choose to deploy the model and continue to use the playground feature with unlimited uses, which will be charged to the organization’s billing account.
For our Viggo model, all metrics are looking pretty good. We entered a few prompts into the playground to further test it, and the results showed no issues.
Deploying the Model
The process to deploy a Large Language Model is the same as it is for our classification models. Just click “Create Deployment” and you’ll be ready to submit API requests in just a few short minutes.
To see a video of the entire training and deployment process for an LLM, head over to our YouTube channel.
Final Thoughts
We hope this in-depth walkthrough of how to build different types of machine learning models with our AutoML platform was helpful. Keep an eye out for more AutoML tutorials in the coming weeks, such as a detailed guide to Retrieval Augmented Generation (RAG), data stream management systems (DSMS), and other exciting features we support.
If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.
Dataset Sources
All datasets linked to as examples in this post are publicly available for a wide range of uses, including commercial use. The snacks dataset and Viggo dataset are both licensed under a Creative Commons Attribution Share-Alike 4.0 (CC BY-SA 4.0) license. They can be found on Hugging Face here and here. The Twitter Sentiment Analysis dataset is licensed under the Apache License, Version 2.0. It is available on Hugging Face here. None of these datasets may be used except in compliance with their respective license agreements.
We often refer to our models as “industry-leading” or “best-in-class,” but what does this actually mean in practice? How are we better than our competitors, and by how much? It is easy to throw these terms around, but we mean it — and we have the evidence to back it up. In this blog post, we’ll be walking through some of the benchmarks that we have run against similar products to show how our models outperform the competition.
Visual Moderation
First, let’s take a look at one of our oldest and most popular models: visual moderation. To compare our model to its major competitors, we ran a test set of NSFW, suggestive, and clean images through all models.
Visual moderation is a classification task — in other words, the model’s job is to classify each submitted image into one of several categories (in this case, NSFW or Clean). A popular and effective metric to measure performance in classification models is by looking at their precision and recall. Precision is the number of true positives (i.e., correctly identified NSFW images) over the number of predicted positives (images predicted to be NSFW). Recall is the number of true positives (correctly identified NSFW images) over the number of ground-truth positives (actual NSFW images).
There is a tradeoff between the two. If you predict all images to be NSFW, you will have perfect recall — you caught all the NSFW images! — but horrible precision because you incorrectly classified many clean images as NSFW. The goal is to have both high recall and high precision, no matter what confidence threshold is used.
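To make the tradeoff concrete, here is a toy example of sweeping the confidence threshold to produce a precision/recall curve with scikit-learn (the labels and scores are made up):

```python
from sklearn.metrics import precision_recall_curve

# 1 = NSFW, 0 = clean; scores are the model's confidence that an image is NSFW.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.98, 0.12, 0.87, 0.65, 0.40, 0.05, 0.93, 0.55]

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
for p, r in zip(precision, recall):
    print(f"precision={p:.2f}  recall={r:.2f}")
```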
With our visual moderation models, we’ve achieved this. We plotted the results of our test as a precision/recall curve, showing that even at high recall we maintain high precision and vice versa while our competitors fall behind us.
The above plot is for NSFW content detection. Our precision at 90% recall is nearly perfect at 99.6%, which makes our error rate a whopping 45 times lower than Public Cloud C. Even Public Clouds A and B, which are closer to us in performance, have error rates 12.5 times higher and 22.5 times higher than ours respectively.
We also benchmarked our model for suggestive content detection, or content that is inappropriate but not as explicit as our NSFW category. Hive’s error rate remains far below the other models, resting at 6 times lower than Public Cloud A and 12 times lower than Public Cloud C. Public Cloud B did not offer a similar category and thus could not be compared.
We only ran this test on NSFW/explicit imagery because our competitors do not have classes equivalent to ours for other visual moderation categories such as drugs, gore, and terrorism. This makes comparisons difficult, though it also speaks to the fact that we offer far more classes than many of our competitors. With more than 90 subclasses, our visual moderation model far exceeds its peers in terms of the granularity of its results — we don’t just have classes for NSFW, but also for nudity, underwear, cleavage, and other smaller categories that give our customers a more in-depth understanding of their content.
Text Moderation
We used precision/recall curves to compare our text moderation model as well. For this comparison, we charted our performance across eight different classes. Hive outperforms all peer models on every single one.
Hive’s error rate on sexual content is 4 times lower than its closest competitor, Public Cloud B. Our other two competitors for that class both have error rates 6 times higher. The threat class boasts similar metrics, with Hive’s error rate between 2 and 4 times lower than all its peers.
Hive’s model for hateful content detection is on par with our competitors, remaining slightly ahead on all thresholds. Our model for bullying content does the same, with an error rate 2 times lower than all comparable models.
Hive is one of few companies to offer text moderation for drugs and weapons, and our error rates here are also worth noting — our only competitor has an error rate 4 and 8 times higher than ours for drugs and weapons respectively.
Hive also offers the child exploitation class, one that few others provide. With this class, we achieve an error rate 8 times lower than our only other major competitor.
Audio Moderation
For Audio Moderation, we evaluate our model using word error rate (WER), which is the gold-standard metric for a speech recognition system. Word error rate is the number of errors divided by the total number of words transcribed, and a perfect word error rate is 0. As you can see, we achieve the best or near-best performance across a variety of languages.
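For reference, WER is typically computed as the word-level edit distance (substitutions, insertions, and deletions) between the transcript and the reference, divided by the number of reference words. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```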
We excel across the board, with the lowest word error rate on the majority of the languages offered. On Spanish in particular, our word error rate is more than 4 times lower than Public Cloud B.
For German and Italian we are very close behind Public Cloud C and remain better than all other competitors.
Optical Character Recognition (OCR)
To benchmark our OCR model, we calculated the F-score for our model as well as several of our competitors. F-score is the harmonic mean of a model’s precision and recall, combining both of them into one measurement. A perfect F-score is 1. When comparing general F-scores, Hive excels as shown below.
We also achieve best-in-class or near-best performance when comparing by language, as shown in the graphs below. With some languages, we excel by quite a large margin. For Chinese and Korean in particular, Hive’s F-score is more than twice that of any competitor. We fall slightly behind in Hindi, yet still perform significantly better than Public Cloud A.
Demographics
We evaluated our age prediction model by calculating mean error, or how far off our age predictions were from the truth. Since the test dataset we used is labeled using age ranges and not individual numbers, mean error is defined as the distance in years from the closest end of the correct age range (i.e., guessing 22 for someone in the range 25-30 is an error of 3 years). A perfect mean error is 0.
As you can see from this distribution, Hive has a significantly lower mean error in the three lowest age buckets (0-2, 3-9, and 10-19). In the age range 0-2, our mean error is 11 times lower than Public Cloud A’s. For the ranges 3-9 and 10-19, our mean error is 5 times and 3 times lower respectively — still quite a large margin. Hive also excels notably in the oldest age bucket (70+), where our mean error is nearly 7 times lower than Public Cloud A’s.
For a broader analysis, we compared our overall mean error across all age buckets, as well as the accuracy of our gender predictions.
AutoML
One of the newest additions to our product suite, our AutoML platform allows you to train image classification and text classification models and to fine-tune large language models with your own custom datasets. To evaluate the effectiveness of this tool, we trained models on both our platform and our competitors’ platforms using the same data and measured the performance of the resulting models.
For image classification, we used three different classification tasks to account for the fact that different tasks have different levels of inherent difficulty and thus may yield higher or lower performing models. We also used three different dataset sizes for each classification task in order to measure how well the AutoML platform is able to work with limited amounts of examples.
We compared the resulting models using balanced accuracy, which is the arithmetic mean of a model’s true positive rate and true negative rate. A perfect balanced accuracy is 100%.
As shown in the above tables, Hive achieves best or near-best accuracy across all sets. Our results are quite similar to Public Cloud B’s, pulling ahead on the product dataset. We fell to near-best performance on the smoking dataset, which is the most difficult of the three classification tasks. Even then, we remained within a few percentage points of the winner, Public Cloud B.
For text classification, we trained models for three different categories: sexual content, drugs, and bullying. The results are in the table below. Hive outperforms all competitors on all three categories using all dataset sizes.
Another important consideration when it comes to AutoML is training time. An AutoML tool could build accurate models, but if it takes an entire day to do so it still may not be a great solution. We compared the time it took to train Hive’s text classification tool for the drugs category, and found that our platform was able to train the model 10 times as fast as Private Company A and 32 times as fast as Public Cloud B. And for the smallest dataset size of 100 examples, we trained the model 18 times faster than Private Company A and 268 times faster than Public Cloud B. That’s a pretty significant speedup.
Measuring the performance of LLMs fine-tuned from our foundation model is a bit more complicated. Here we evaluate two different tasks: question answering and closed-domain classification.
To measure performance on the question answering task, we used a metric called token accuracy. Token accuracy indicates how many tokens are the same between the model’s response and the expected response from the test set. A perfect token accuracy is 100%. As shown below, our token accuracy is higher than or around the same as our competitors’ for all dataset sizes.
This is also true for the classification task, where we maintained roughly the same performance as Public Cloud A across the various dataset sizes. Below are the full results of our comparison.
Final Thoughts
As illustrated throughout this in-depth look into the performance of our models, we truly earn the title “best-in-class.” We conduct these benchmarks not just to justify that title, but more so as part of our constant effort to make our models the best that they can be. Reviewing these analyses helps us to identify our strengths, yes, but also our weaknesses and where we can improve.
If you have any questions about any of the benchmarks we’ve discussed here or any other questions about our models, please don’t hesitate to reach out to us at sales@thehive.ai.
Hive was thrilled to have our CTO Dmitriy present at the Workshop on Multimodal Content Moderation during CVPR last week, where we provided an overview of a few important considerations when building machine learning models for classification tasks. What are the effects of data quantity and quality on model performance? Can we use synthetic data in the absence of real data? And after model training is done, how do we spot and address bias in the model’s performance?
Read on to learn some of the research that has made our models truly best-in-class.
The Importance of Quality Data
Data is, of course, a crucial component in machine learning. Without data, models would have no examples to learn from. It is widely accepted in the field that the more data you train a machine learning model with, the better. Similarly, the cleaner that data is, the better. This is fairly intuitive — the basic principle is true for human learners, too. The more examples to learn from, the easier it is to learn. And if those examples aren’t very good? Learning becomes more difficult.
But how important is good, clean data to building a good machine learning model? Good data is not always easy to come by. Is it better to use more data at the expense of having more noise?
To investigate this, we trained a binary image classifier to detect NSFW content, varying the amount of data between 10 images and 100k images. We also varied the noise level by flipping the labels on between 0% and 50% of the data. We then plotted the balanced accuracy of the resulting models using the same test set.
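Label flipping is a simple way to inject a controlled amount of noise into a dataset; a sketch of how one might implement it (our experiment pipeline itself is not shown here) looks like this:

```python
import random

def flip_labels(labels, noise_fraction, seed=0):
    """Return a copy of binary labels (0/1) with a given fraction flipped at random."""
    rng = random.Random(seed)
    flipped = list(labels)
    for i in rng.sample(range(len(flipped)), int(noise_fraction * len(flipped))):
        flipped[i] = 1 - flipped[i]
    return flipped

# e.g. inject 10% label noise into a 100k-example training set
noisy_labels = flip_labels([0, 1] * 50_000, noise_fraction=0.10)
```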
The result? It turns out that data quality is more important than we may think. It was clear that, as expected, accuracy was the best when the data was both as large as possible (100k examples) and as clean as possible (0% noise). From there, however, the table gets more interesting.
As seen above, the model trained with only 10k examples and no noise performs better than the model trained with ten times as much data (100k) and 10% noise. The general trend appears to be similar — clean data matters very much, and noise can quickly tank performance even when using the maximum amount of data. In other words, less data is sometimes preferable to more data if it is cleaner.
We wondered how this would change with a more detailed classification problem, so we built a new binary image classifier. This time, we trained the model to detect images of smoking, which requires picking up signal from a small part of an image.
The outcome, shown below, echoes the results from the NSFW model — clean data has a great impact on performance even with a very large dataset. But the quantity of data appears to be more important than it was in the NSFW model. While 5000 examples with no noise got around 90% balanced accuracy for the NSFW model, that same amount of noiseless data only got around 77% for the smoking classifier. The increase in performance, while still strongly tied to data quantity, was noticeably slower and only the largest datasets produced well-performing models.
It makes sense that quantity of data would be more important with a more difficult classification task. Data noise also remained a crucial factor for the models trained with more data — the 50k model with 10% noise performed about the same as the 100k model with 10% noise, illustrating once more that more data is not always better if it is still noisy.
Our general takeaways here are that while both data quality and quantity matter quite a bit, clean data is more important beyond a certain quantity threshold. This threshold is where performance increases begin to plateau as the data grows larger, yet noisy data continues to have significant effects on model quality. And as we saw by comparing the NSFW model and the smoking one, this quality threshold also changes depending on the difficulty of the classification task itself.
Training on Synthetic Data: Does it Help or Hurt?
So having lots of clean data is important, but what can be done when good data is hard to find or costly to acquire? With the rise of AI image generation over the past few years, more and more companies have been experimenting with generated images to supplement visual datasets. Can this kind of synthetic data be used to train visual classification models that will eventually classify real data?
In order to try this out, we trained five different binary classification models to detect smoking. Three of the models were trained exclusively on real data (10k, 20k, and 40k examples respectively), one on a mix of real and synthetic images (10k real and 30k synthetic), and one entirely on synthetic data (40k). Each dataset had an even split of 50% smoking and 50% nonsmoking examples. To evaluate the models, we used two balanced test sets: one with 4k real images and one with 4k synthetic images. All synthetic images were created using Stable Diffusion.
Looking at the precision and recall curves for the various models, we made an interesting discovery. Unsurprisingly, the model trained on the largest entirely real dataset (40k) performed the best. But the model trained on 10k real images and 30k synthetic images performed significantly better than the one trained only on 10k real images.
These results suggest that while large amounts of real data are best, a mixture of synthetic and real data could in fact boost model performance when little data is available.
Keeping an Eye Out For Bias
After model training is finished, extensive testing must be done to make sure there aren’t any biases in the model results. Some biases exist in the real world and are thus often ingrained in real-world data, such as racial or gender bias; others occur in the data purely by coincidence.
A great example of how unpredictable certain biases can be came recently during a model training for NSFW detection, where the model started flagging many pictures of computer keyboards as false positives. Upon closer investigation, this occurred because many of the NSFW pictures in our training data were photos of computers whose screens were displaying explicit content. Since the computer screens were the focus of these images, keyboards were also often included, leading to the false association that keyboards are an indicator of NSFW imagery.
Three images that were falsely categorized as NSFW
In order to correct this bias, we added more non-NSFW keyboard examples to the training data. Correcting the bias in this way not only addresses the bias itself, but also boosts general model performance. Of course, addressing bias is even more critical when dealing with data that carries current or historical biases against minority groups, which would otherwise be perpetuated by being ingrained into future technology. The importance of detecting and correcting these biases cannot be overstated: leaving them unaddressed carries a significant amount of risk beyond simply calling a keyboard NSFW.
Regardless of the type of bias, it’s important to note that biases aren’t always readily apparent. The original model prior to addressing the bias had a balanced accuracy of 80%, which is high enough that the bias may not have been immediately noticeable since errors weren’t extremely frequent. The takeaway here is thus not just that bias correction matters, but that looking into potential biases is necessary even when you might not think they’re there.
Takeaways
Visual classification models are in many ways the heart of Hive — they were our main launching point into the space of content moderation and AI-powered APIs more broadly. We’re continuously searching for ways to keep improving these models as the research surrounding them grows and evolves. Conclusions like those discussed here — the importance of clean data, particularly when you have lots of it, the possible use of synthetic data when real data is lacking, and the need to find and correct all biases (don’t forget about the unexpected ones!) — greatly inform the way we build and maintain our products.
We’re excited to announce Hive’s new AutoML tool that provides customers with everything they need to train, evaluate, and deploy customized machine learning models.
Our pre-trained models solve a wide range of use cases, but we will always be limited by the number of models we can build. Now customers who find that their unique needs and moderation guidelines don’t quite match any of our existing solutions can create their own, custom-built for their platform and easily accessible via API.
AutoML can be used to augment our current offerings or to create new models entirely. Want to flag a particular subject that doesn’t exist as a head in our Text Moderation API, or a certain symbol or action that isn’t part of our Visual Moderation? With AutoML, you can quickly build solutions for these problems that are already integrated with your Hive workflow.
Let’s walk through our AutoML process to illustrate how it works. In this example, we’ll build a text classification model that can determine whether or not a given news headline is satirical.
First, we need to get our data in the proper format. For text classification models, all dataset files must be in CSV format. One column should contain the text data (titled text_data) and all other columns represent model heads (classification categories). The values within each row of any given column represent the classes (possible classifications) within that head. An example of this formatting for our satire model is shown below:
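As an illustration, a few rows of such a file might look like the following (the headlines and label names are made up for this example):

```
text_data,satire
"Nation's Cats Announce Plan To Continue Ignoring Owners",satire
"City council approves funding for new bike lanes downtown",not_satire
"Local Man Heroically Finishes Leftovers Before They Expire",satire
```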
The first page you’ll see on Hive’s AutoML platform is a dashboard with all of your organization’s training projects. In the image below, you’ll see how the training and deployment status of old projects are displayed. To create our satire classifier, we’re going to make a new project by hitting the “Create New Project” button in the top right corner.
We’ll then be prompted to provide a name and description for the project as well as training data in the form of a CSV file. For test data, you can either upload a separate CSV file or choose to randomly split your training data into two files, one to be used for training and the other for testing. If you decide to split your data, you will be able to choose the percentage that you would like to split off.
After all of that is entered, we are ready to train! Beginning model training is as easy as hitting a single button. While your model trains, you can easily view its training status on the Training Projects page.
Once training is completed, your project page will show an analysis of the model’s performance. The boxes at the top allow you to decide if you want to look at this analysis for a particular class or overall. If you’re building a multi-headed model, you can choose which head you’d like to evaluate as well. We provide precision, recall, and balanced accuracy for all confidence thresholds as well as a PR curve. We also display a confusion matrix to show how many predictions were correct and incorrect per class.
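To make these metrics concrete, here is a small sketch of how they can be computed at a single confidence threshold (the platform reports them across all thresholds); the labels and scores below are made up for illustration:

```python
from sklearn.metrics import (precision_score, recall_score,
                             balanced_accuracy_score, confusion_matrix)

# Hypothetical ground-truth labels (1 = satire) and model confidence scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.92, 0.10, 0.45, 0.81, 0.60, 0.05, 0.77, 0.30]

threshold = 0.5  # predictions at or above this confidence are labeled satire
y_pred = [1 if s >= threshold else 0 for s in scores]

print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```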
Once you’re satisfied with your model’s performance, select the “Create Deployment” button to launch the model. As with model training, deployment takes a few moments. Once deployment is complete, you can view it in your Hive customer dashboard, where you can access your API key, view current tasks, and find other information just as you would with our pre-trained models.
We’re very excited to be adding AutoML to our offerings. The platform currently supports both text and image classification, and we’re working to add support for large language models next. If you’d like to learn more about our AutoML platform and other solutions we’re building, please feel free to reach out to sales@thehive.ai or contact us here.
Hive is excited to announce our new classifier to differentiate between AI-generated and human-written text. This model is hosted on our website as a free demo, and we encourage users to test out its performance.
The recent release of OpenAI’s ChatGPT model has raised questions about how public access to these kinds of large language models will impact the field of education. Certain school districts have already banned access to ChatGPT, and teachers have been adjusting their teaching methods to account for the fact that generative AI has made academic dishonesty a whole lot easier. Since the rise of internet plagiarism, plagiarism detectors have become commonplace at academic institutions. Now a need arises for a new kind of detection: AI-generated text.
Our AI-Generated Text Detector outperforms key competitors, including OpenAI itself. We compared our model to their detector, as well as two other popular AI-generated text detection tools: GPTZero and Writer’s AI Content Detector. Our model was the clear frontrunner, not just in terms of balanced accuracy but also in terms of false positive rate — a critical factor when these tools are deployed in an educational setting.
Our test dataset consisted of 242 text passages, including both ChatGPT-generated and human-written text. To ensure that our model behaves correctly on all genres of content, we included everything from casual writing to more technical and academic writing. We took special care to include texts written by people learning English as a second language, to verify that their writing is not incorrectly categorized due to differences in tone or wording. On this test set, our balanced accuracy stands at an impressive 99%, while the closest competitor, GPTZero, reaches 83%. OpenAI’s own detector scored lowest, at only 73%.
Others have tried our model against OpenAI’s in particular, and they have echoed our findings. Following OpenAI’s classifier release, Mark Hachman at PCWorld published an article that suggested that those disappointed with OpenAI’s model should turn to Hive’s instead. In his own informal testing of our model, he praised our results for their accuracy as well as our inclusion of clear confidence scores for every result.
A large fear about using these sorts of detector tools in an educational setting is the potentially catastrophic impact of false positives: cases in which human-written text is classified as AI-generated. While building our model, we were mindful that the risk of such high-cost false positives is one many educators may not want to take. In response, we prioritized lowering our false positive rate. On the test set above, our false positive rate is incredibly low, at 1%, compared to OpenAI’s at 12.5%, Writer’s at 46%, and GPTZero’s at 30%.
Even with our low false positive rate, we encourage using this tool as part of a broader process when investigating academic dishonesty, not as the sole decision maker. Just like plagiarism checkers, it is meant to be a helpful screening tool, not a final judge. We are continuously working to improve our model, and any feedback is greatly appreciated. Large language models like ChatGPT are here to stay, and it is crucial to give educators tools they can use as they decide how to navigate these changes in their classrooms.
When generative AI models first gained popularity in the late 2010s, they brought with them the ability to create deepfakes. Deepfakes are synthetic media, typically video, in which one person’s likeness is replaced by another’s using deep learning. They are powerful tools for fraud and misinformation, allowing for the creation of synthetic videos of political leaders and letting scammers easily take on new identities.
The primary use, though, of deepfake technology is the fabrication of nonconsensual pornography. The term “deepfake” itself was coined in 2017 by a Reddit user of the same name who made fake pornographic videos featuring popular female celebrities. In 2019, the company Sensity AI catalogued deepfakes across the web and reported that a whopping 96% of them were pornographic, all of which were of women. In the years since, more of this sort of deepfake pornography has become readily available online, with countless forums and even entire porn sites dedicated to it. The targets of this are not just celebrities. They are also everyday women superimposed into adult content by request—on-demand revenge porn for anyone with an internet connection.
Many sites have banned deepfakes entirely, since they are far more often used for harm than for good. At Hive, we’re committed to providing API-accessible solutions for challenging moderation problems like this one. We’ve built our new Deepfake Detection API to empower enterprise customers to easily identify and moderate deepfake content hosted on their platforms.
This blog post explains how our model identifies deepfakes and the new API that makes this functionality accessible.
A Look Into Our Model
Hive’s Deepfake Detection model is essentially a version of our Demographic API that is optimized to identify deepfakes as opposed to demographic attributes. When a query is submitted, this visual detection model locates any faces present in the input. It then performs an additional classification step that determines whether or not each detected face is a deepfake. In its response, it provides a bounding-box location and classification (with confidence scores) for each face.
While the face detection aspect of this process is the same as the one used for our industry-leading Demographic API, the classification step was fine-tuned for deepfake identification by training on a vast repository of synthetic and real video data. Many of these examples were pulled from genres commonly associated with deepfakes, such as pornography, celebrity interviews, and movie clips. We also included other types of examples in order to create a classifier that identifies deepfakes across many different content genres.
Putting It All Together: Example Input and Response
With only one head, the response of our Deepfake Detection model is easily interpretable. When an image or video query is submitted, it is first split into frames. Each frame is then analyzed by our visual detection model in order to find any faces present in the image. Every face then receives a deepfake classification — either yes_deepfake or no_deepfake. Confidence scores for these classifications range from 0.0 to 1.0, with a higher score indicating higher confidence in the model’s results.
Example Deepfake Detection input and API response
Here we see the deepfaked image and, to its left, the two original images used to create it. This input image doesn’t appear to be fake at first glance, especially when the image is displayed at a small size. Even with a close examination, a human reviewer could fail to realize that it is actually a deepfake. As the example illustrates, the model correctly identifies this realistic deepfake with a high confidence score of more than 0.99. Since there is only one face present in this image, we see one corresponding “bounding poly” in the response. This “bounding poly” contains all model response information for that face. Vertices and dimensions are also provided, though those fields are truncated here for clarity.
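For readers who haven’t seen the dashboard output, a simplified sketch of the shape of such a response is below. The field names and nesting are illustrative rather than the exact schema, and geometry fields are omitted just as they are in the figure above:

```python
# Illustrative response shape for a single detected face (not the exact API schema).
example_response = {
    "bounding_polys": [
        {
            # Vertex coordinates and face dimensions omitted, as in the figure above.
            "classes": [
                {"class": "yes_deepfake", "score": 0.9912},
                {"class": "no_deepfake",  "score": 0.0088},
            ],
        }
    ],
}

# A simple moderation rule: flag the image if any face is a likely deepfake.
is_deepfake = any(
    c["score"] >= 0.9
    for poly in example_response["bounding_polys"]
    for c in poly["classes"]
    if c["class"] == "yes_deepfake"
)
print("flag for review:", is_deepfake)
```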
Because deepfakes like this one can be very convincing, they are difficult to moderate with manual flagging alone. Automating this task is not only ideal to accelerate moderation processes, but also to spot realistic deepfakes that human reviewers might miss.
Digital platforms, particularly those that host NSFW media, can integrate this Deepfake Detection API into their workflows by automatically screening all content as it is posted. Video communication platforms and applications that use any kind of visual identity verification can also utilize our model to counter deepfake fraud.
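As a rough sketch of what such an integration could look like, the snippet below posts newly uploaded media to a moderation endpoint and quarantines anything flagged as a likely deepfake. The endpoint URL, header format, and response parsing are placeholders for illustration, not Hive’s documented API:

```python
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.example.com/deepfake-detection"  # placeholder, not the documented URL

def screen_upload(image_path: str, threshold: float = 0.9) -> bool:
    """Return True if any detected face is a likely deepfake above the given confidence."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()

    # Same illustrative response shape as the sketch above.
    return any(
        c["class"] == "yes_deepfake" and c["score"] >= threshold
        for poly in result.get("bounding_polys", [])
        for c in poly.get("classes", [])
    )

if screen_upload("new_upload.jpg"):
    print("Quarantine for human review")
```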
Final Thoughts
Hive’s Deepfake Detection API joins our recently released AI-Generated Media Recognition API in our effort to expand content moderation to keep pace with the fast-growing domain of generative AI. Moving forward, we plan to continually update both models to keep up with new generative techniques, popular content genres, and emerging customer needs.
The recent popularity of diffusion models like Stable Diffusion, Midjourney, and DALL-E 2 has brought deepfakes back into the spotlight and sparked conversation on whether these newer generative techniques can be used to develop brand-new ways of making them. Whether or not this happens, deepfakes aren’t going away any time soon and are only growing in number, popularity, and quality. Identifying and removing them across online platforms is crucial to limit the fraud, misinformation, and digital sexual abuse that they enable.
If you’d like to learn more about our Deepfake Detection API and other solutions we’re building, please feel free to reach out to sales@thehive.ai or contact us here.
In the past few months, AI-generated art has experienced rapid growth in both popularity and accessibility. Engines like DALL-E, Midjourney, and Stable Diffusion have spurred an influx of AI-generated artworks across online platforms, prompting an intense debate around their legality, artistic value, and potential for enabling the propagation of deepfake-like content. As a result, certain digital platforms such as Getty Images, InkBlot Art, Fur Affinity, and Newgrounds have announced bans on AI-generated content entirely, with more to likely follow in the coming weeks and months.
Platforms are enacting these bans for a variety of reasons. Online communities built for artists to share their artwork, such as Newgrounds, Fur Affinity, and Purpleport, stated that they banned AI artwork in order to keep their sites focused exclusively on human-created art. Other platforms have taken action against AI-generated artwork due to copyright concerns. Image synthesis models often include copyrighted images in their training data, which consists of massive amounts of photos and artwork scraped from across the web, typically without the artists’ consent. It is an open question whether this type of scraping and the resulting AI-generated artwork amount to copyright violations — particularly in the case of commercial use — and platforms like Getty and InkBlot Art don’t want to take that risk.
As part of Hive’s commitment to providing enterprise customers with API-accessible solutions to moderation problems, we have created a classification model made specifically to assist digital platforms in enacting these bans. Our AI-Generated Media Recognition API is built with the same type of robust classification model as our industry-leading visual moderation products, and it enables enterprise customers to moderate AI-generated artwork without relying on users to flag images manually.
This post explains how our model works and the new API that makes this functionality accessible.
Using AI to Identify AI: Building Our Classifier
Hive’s AI-Generated Media Recognition model is optimized for use with the kind of media generated by popular AI generative engines such as DALL-E, Midjourney, and Stable Diffusion. It was trained on a large dataset comprising millions of artificially generated images and human-created images such as photographs, digital and traditional art, and memes sourced from across the web.
The resulting model is able to identify AI-created images among many different types and styles of artwork, even correctly identifying AI artwork that could be misidentified by manual flagging. Our model returns not only whether or not a given image is AI-generated, but also the likely source engine it was generated from. Each classification is accompanied by a confidence score that ranges from 0.0 to 1.0, allowing customers to set a confidence threshold to guide their moderation.
How it Works: An Example Input and Response
When receiving an input image, our AI-Generated Media Recognition model returns classifications under two separate heads. The first provides a binary classification of whether or not the image is AI-generated. The second, relevant only when the image is classified as AI-generated, identifies its likely source from among the most popular generation engines currently in use.
To get a sense of the capabilities of our AI-Generated Media Recognition model, here’s a look at an example classification:
This input image was created with the AI model Midjourney, though it is so realistic that it may be missed by manual flagging. As shown in the response above, our model correctly classifies this image as AI-generated with a high confidence score of 0.968. The model also correctly identifies the source of the image, with a similarly high confidence score. Other sources like DALL-E are also returned along with their respective confidence scores, and the scores under each of the two model heads sum to 1.
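As a sketch of how a downstream moderation rule might consume a two-headed response like this, consider the snippet below. The field names and the source-head scores are illustrative rather than the exact schema; only the 0.968 AI-generated score comes from the example above:

```python
# Illustrative two-headed response: one head for AI-generated vs. not,
# one head for the likely source engine. Scores under each head sum to 1.
example_response = {
    "ai_generated": {"ai_generated": 0.968, "not_ai_generated": 0.032},
    "source_engine": {"midjourney": 0.91, "dalle": 0.05, "stable_diffusion": 0.03, "other": 0.01},
}

THRESHOLD = 0.9  # confidence required before taking automated action

if example_response["ai_generated"]["ai_generated"] >= THRESHOLD:
    source = max(example_response["source_engine"], key=example_response["source_engine"].get)
    print(f"Flagged as AI-generated (likely source: {source})")
else:
    print("Treated as human-created")
```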
Platforms that host artwork of any kind can integrate this AI-Generated Media Recognition API into their workflows by automatically screening all content as it is being posted. This method of moderating AI artwork works far more quickly than manual flagging and can catch realistic artificial artworks that even human reviewers might miss.
Final Thoughts and Future Directions
Digital platforms are now being flooded with AI-generated content, and that influx will only increase as these generative models continue to grow and spread. On top of this, the tools for creating this kind of artwork are fast and freely accessible online, which means large quantities of it can be produced quickly. Moderating artificially created artworks is crucial for many sites to maintain their platform’s mission and protect themselves and their customers from potential legal issues down the line.
We created our AI-Generated Media Recognition API to solve this problem, but our model will need to keep evolving as existing image generation models improve and new ones are released. We plan to add new generative engines to our sources and to continually update our model to keep pace with their current capabilities. Since some newer generative models can create video in addition to still images, we are also working to add support for video formats within our API, to help keep all types of AI-generated artwork from overwhelming online communities where they are unwelcome.
If you’d like to learn more about this and other solutions we’re building, please feel free to reach out to sales@thehive.ai or contact us here.