Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.
Combating Synthetic Media and Disinformation
Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary to protect the integrity of vital information systems and to stay vigilant against the spread of misinformation, threats, and conflict at a national scale.
Hive’s reputation as a frontline defender against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand, but we have been and continue to be committed to delivering unmatched detection tools that mitigate these risks with accuracy and speed.
Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.
Our Cutting-Edge Tools
To best empower the U.S. defense forces against potential threats, we have provided five proprietary models that can detect whether an input is AI-generated or a deepfake.
If an input is flagged as AI-generated, it was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the original faces has been swapped with another person’s face; the sketch after the list below shows how these two signals might be handled together.
The models we’ve provided are as follows:
AI-Generated Detection (Image and Video), which detects if an image or video is AI-generated.
AI-Generated Detection (Audio), which detects if an audio clip is AI-generated.
Deepfake Detection (Image), which detects if an image contains one or more faces that are deepfaked.
Deepfake Detection (Video), which detects if a video contains one or more faces that are deepfaked.
Liveness (Image and Video), which detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside of the primary image).
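As that distinction suggests, the two signals call for separate handling. Here is a minimal Python sketch of how a consuming application might act on them; the field names and thresholds are hypothetical, not the models’ actual response schema:

```python
# Minimal sketch with hypothetical field names and thresholds; the actual
# response schema is defined by the deployed models, not by this example.
def flag_synthetic_media(scores: dict, threshold: float = 0.9) -> list:
    """Return human-readable findings for scores that exceed a review threshold."""
    findings = []
    if scores.get("ai_generated", 0.0) >= threshold:
        findings.append("likely AI-generated (created by a generative engine)")
    if scores.get("deepfake", 0.0) >= threshold:
        findings.append("likely deepfake (real media with a swapped face)")
    return findings

# Example: media scoring high on the deepfake head but low on AI generation
print(flag_synthetic_media({"ai_generated": 0.03, "deepfake": 0.97}))
```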
Forging a Path Forward
Even as new threats continue to emerge and escalate, Hive continues to be steadfast in our commitment to provide the world’s most capable AI models for validating the safety and authenticity of digital content.
For more details, you can find our recent press release here and the DIU’s press release here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
Hive is excited to announce that we are releasing a new API: Text Moderation Explanations! This API helps customers understand why our Text Moderation model assigns text strings particular scores.
The Need For Explainability
Hive’s Text Moderation API scans a text string or message, interprets it, and returns a score from 0 to 3 mapping to a severity level across a number of top-level classes and dozens of languages. Today, hundreds of customers send billions of text strings each month through this API to protect their online communities.
A top feature request has been explanations for why our model assigns the scores it does, especially for foreign languages. While some moderation scores are self-explanatory, edge cases can leave ambiguity about why a string was scored the way it was.
This is where our new Text Moderation Explanations API comes in—delivering additional context and visibility into moderation results in a scalable way. With Text Moderation Explanations, human moderators can quickly interpret results and utilize the additional information to take appropriate action.
A Supplement to Our Text Moderation Model
Our Text Moderation classes are ordered by severity, ranging from level 3 (most severe) to level 0 (benign). These classes correspond to the possible scores Text Moderation can give a text string. For example, if a text string falls under the “sexual” head and contains sexually explicit language, it would be given a score of 3.
The Text Moderation Explanations API takes in three inputs: a text string, its class label (either “sexual”, “bullying”, “hate”, or “violence”), and the score it was assigned (either 3, 2, 1, or 0). The output is a text string that explains why the original input text was given that score relative to its class. It should be noted that Explanations is only supported for select multilevel heads (corresponding to the class labels listed previously).
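To make the interface concrete, here is a hedged sketch of what a call might look like; the endpoint path and field names are illustrative assumptions, so consult the documentation for the actual request format:

```python
import requests

API_KEY = "your-api-key"  # provided with your Hive project

# Illustrative payload: a text string, its class label, and its assigned score.
payload = {
    "text_data": "an example input string",
    "class_label": "hate",  # one of: sexual, bullying, hate, violence
    "score": 2,             # the severity score Text Moderation assigned
}

response = requests.post(
    "https://api.thehive.ai/api/v2/task/sync",  # placeholder endpoint path
    headers={"Authorization": f"Token {API_KEY}"},
    json=payload,
    timeout=30,
)
# The response is expected to include a natural-language explanation string.
print(response.json())
```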
To develop the Explanations model, we used a supervised fine-tuning process. We used labeled data—which we internally labeled at Hive using native speakers—to fine-tune the original model for this specialized process. This process allows us to support a number of languages apart from English.
Comprehensive Language Support
We have built our Text Moderation Explanations API with broad initial language support, addressing the crucial difficulty of understanding why a text string in a moderator’s non-native language was scored a certain way.
We currently support eight different languages and four top-level classes for Text Moderation Explanations:
Text Moderation Explanations are now included at no additional cost as part of our Moderation Dashboard product, as shown below:
Customers can also access the Text Moderation Explanations model through an API (refer to the documentation).
In future releases, we anticipate adding further language and top-level class support. If you’re interested in learning more or gaining test access to the Text Moderation Explanations model, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
We are excited to announce that Hive is now offering Thorn’s predictive technology through our CSAM detection API! This API now enables customers to identify novel cases of child sexual abuse material (CSAM) in addition to detecting known CSAM using hash-based matching.
Our Commitment to Child Internet Safety
At Hive, making the internet safer is core to our mission. While our content moderation tools help reduce human exposure to harmful content across many categories, addressing CSAM requires specialized expertise and technology.
That’s why we’re expanding our existing partnership with Thorn, an innovative nonprofit that builds technology to defend children from sexual abuse and exploitation in the digital age.
Until now, our integration with Thorn focused on hash-matching technology to detect known CSAM. The new CSAM detection API builds on this foundation by adding advanced machine learning capabilities that can identify previously unidentified CSAM.
By combining Thorn’s industry-leading CSAM detection technology with Hive’s comprehensive content moderation suite, we provide platforms with robust protection against both known and newly created CSAM.
How the Classifier Works
The classifier works by first generating embeddings of the uploaded media. An embedding is a list of computer-generated scores between 0 and 1. After generating the embeddings, Hive permanently deletes all of the original media. We then use the classifier to determine whether the content is CSAM based on the embeddings. This process ensures that we do not retain any CSAM on our servers.
The classifier returns a score between 0 and 1 that predicts whether a video or image is CSAM. The response object will have the same general structure for both image and video inputs. Please note that Hive will return both results together: probability scores from the classifier and any match results from hash matching against the aggregated hash database.
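For illustration, a consuming application might combine the two signals along these lines; the key names below are assumptions for the sketch, not the documented schema:

```python
# Sketch of combining the classifier score with hash-match results.
def triage_csam_result(result: dict, threshold: float = 0.8) -> str:
    hash_matches = result.get("hash_matches", [])           # matches against known CSAM
    classifier_score = result.get("classifier_score", 0.0)  # 0..1 probability

    if hash_matches:
        return "known CSAM (hash match): report and remove"
    if classifier_score >= threshold:  # the threshold is the platform's choice
        return "suspected novel CSAM: escalate to trained reviewers"
    return "no CSAM signal detected"
```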
For a detailed guide on how to use Hive’s CSAM detection API, refer to the documentation.
Building a Safer Internet
Protecting platforms from CSAM demands scalable solutions. The problem is complex, but our integration with Thorn’s advanced technology provides an efficient way to detect and stop CSAM, helping to safeguard children and build a safer internet for all.
If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
We are excited to announce that we are making select proprietary Hive models and popular open-source generative models directly accessible for customers to deploy and integrate into their workflows.
Starting today, customers can now create projects by themselves with just a few clicks.
Hive Proprietary Models
We have made select proprietary Hive models accessible to customers across our Understand and Search model categories, ranging from our Celebrity Recognition API to our Speech-to-Text model. For a full list of generally available models, see our pricing page here.
Additional Model Offerings
We currently offer a variety of open-source image generation models and large language models (LLMs) that customers can directly access themselves.
For image generation models, we have four different options available today, with additional models being served in the coming weeks: SDXL (Stable Diffusion XL), SDXL Enhanced, Flux Schnell, and Flux Schnell Enhanced. SDXL Enhanced and Flux Schnell Enhanced are Hive’s enhanced versions of the aforementioned base models, served exclusively to our customers. The differences are outlined in the table below.
SDXL (Stable Diffusion XL)
A latent diffusion text-to-image generation model produced by Stability AI, trained on a larger dataset than the original Stable Diffusion and built with a larger UNet that enables better generation.
SDXL Enhanced
Hive’s enhanced version of SDXL, served exclusively to our customers. Tailored toward a photorealistic and refined art style with extreme detail.
Flux Schnell
The fastest model in the Flux suite of text-to-image models, capable of generating images in four or fewer steps. Best suited for local development and personal use.
Flux Schnell Enhanced
Hive’s enhanced version of Flux Schnell that is trained on our proprietary data and retains the base model’s speed and efficiency, served exclusively to our customers. Generates images across a wide range of artistic styles with a specialization in photorealism, leading to high levels of customer satisfaction based on past user studies.
For LLMs, we have a selection of Meta’s Llama models from their Llama 3.1 and 3.2 series available now. The differences are outlined in the table below.
Llama 3.1 8B Instruct
Llama 3.1 8B Instruct is a multilingual, instruction-tuned text-only model. Compared to other available open source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (70B).
Llama 3.1 70B Instruct
Llama 3.1 70B Instruct is a multilingual, instruction-tuned text-only model. Compared to other available open source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (8B).
Llama 3.2 1B Instruct
Llama 3.2 1B Instruct is a lightweight, multilingual, instruction-tuned text-only model that fits onto both edge and mobile devices. Use cases where the model excels include summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (3B).
Llama 3.2 3B Instruct
Llama 3.2 3B Instruct is a lightweight, multilingual, instruction-tuned text-only model that fits onto both edge and mobile devices. Use cases where the model excels include summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (1B).
We plan to make more models available for direct use in the coming months.
How to Create a Project
Creating new projects has never been easier. To get started, go to thehive.ai and click on the “Go to Dashboard” button in the top-right corner.
If you are not logged in, the “Go to Dashboard” button will redirect you to the sign-in page. Then, either sign in to an existing account or click the blue “Sign up” hyperlink at the bottom of the page to create a new account.
You will receive an email to verify your account after signing up. After you’ve either logged into an existing account or verified your new account, you will be redirected to the main dashboard.
For new accounts, a new organization named “(User Name)’s personal organization” will be automatically created. Your current organization will be visible in the top-right corner. Before you can submit tasks, you will need to accept the Terms of Use and add credits to your account. To accept the Terms of Use, click the “View Terms and Conditions” button at the bottom of the page. You will need to do this for every additional organization you create.
To add funds to your credit balance, locate the “Billing” section in the bottom-left corner of the dashboard and click the blue “Add Credit” button, which will redirect you to another page where you can add a payment method.
Now you’re ready to create your own projects. On any page, click on the “Products” tab on the left side of the header. From the dropdown menu that appears, select “Models.” You will be redirected to the following page, where you can view all of your current projects.
To create a new project, click on the plus (+) sign next to “Projects” on the top-left side of the screen. You will be redirected to the following page, where you can choose your project type. Select “Hive Models.”
Then, you will be redirected to another page containing our available models. Click to select the desired model for your project.
After selecting your desired model, you will need to configure your project. Change your project’s name using the text box below. Once you hit the blue “Create” button, your project will be live.
Upon project creation, you will be redirected to the following interface. Here, you can view your API key by clicking the “API Keys” button on the top right.
Using this API key, you can call the API by making a cURL request in your terminal. To interpret the results, please refer to our documentation and look up the relevant model and its class definitions.
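For example, a minimal request might look like the sketch below, shown in Python rather than raw cURL; the endpoint path and payload fields are placeholders to adapt from the documentation:

```python
import requests

API_KEY = "your-project-api-key"  # from the "API Keys" panel

# Placeholder endpoint and payload; check the documentation for the request
# format of the specific model backing your project.
response = requests.post(
    "https://api.thehive.ai/api/v2/task/sync",
    headers={"Authorization": f"Token {API_KEY}"},
    data={"url": "https://example.com/image-to-classify.jpg"},
    timeout=30,
)
task = response.json()
print(task)  # interpret class scores using the model's class definitions
```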
For pricing details, please reference our model pricing table here. If you run into any issues building your projects, please feel free to reach out to us at support@thehive.ai and we will be happy to help. If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
Hive is excited to announce the groundbreaking integration of our proprietary AI models with NVIDIA NIM. For the first time, this collaboration allows Hive customers to deploy our industry-leading AI models in private clouds and on-premises data centers. We are also announcing that for the remainder of 2024, internet social platforms can receive up to 90 days of free trial access to our models. To learn more, check out the press release here.
The first Hive models to be made available with NVIDIA NIM are our AI-generated content detection models, which allow customers to identify AI-generated images, video, and audio. However, we plan to make additional models available through NVIDIA NIM in the coming months, including content moderation, logo detection, optical character recognition, speech transcription, custom models through Hive’s AutoML platform, and more.
Secure and Accelerated Deployments with NIM
Short for NVIDIA Inference Microservices, NIM provides models as optimized containers to prospective customers. This enables organizations to run AI models on NVIDIA GPUs in private clouds, on workstations, and on-premises. NVIDIA NIM is part of the NVIDIA AI Enterprise software platform and connects the power of Hive’s proprietary AI models, securely deployed on NVIDIA’s accelerated infrastructure, with enterprise customers everywhere.
While Hive’s cloud-based APIs process billions of customer requests every month, among prospective customers’ top requests has been the ability to deploy Hive models in private clouds or on-premises. These are often enterprises whose strict data governance standards challenge the use of our cloud-based APIs. Our integration with NIM solves this challenge.
How Customers Use Our Leading AI Detection Models
Our AI-detection tools—the first Hive models to be made available with NVIDIA NIM—have been widely recognized as best-in-class, including by an independent research study from the University of Chicago. The researchers found that Hive’s model was the “clear winner” against both its automated competitors and highly-trained human experts in classifying images as either AI-generated or human-created.
With generative AI on the rise, Hive’s AI detection models have become crucial in combating the technology’s misuse. Here are select ways that customers use our models to protect themselves from the potential misuse of AI-generated and synthetic content.
Internet social platforms leverage our AI detection models to proactively screen content for the presence of AI-enabled misinformation in real time. Digital platforms can leverage our detections to provide transparency to their users by tagging content as AI-generated, or moderate potential misinformation by implementing sitewide bans.
Insurance companies use our models to automate the process of identifying AI-enabled fraud in evidence submitted with insurance claims. By scanning claims evidence for AI-generated augmentations, insurers can quickly, confidently, and securely weed out fraud, avoiding the significant cost of paying out fraudulent claims.
Banks, brokers, and other financial institutions use our AI-generated content detection models to secure their user identity verification and KYC processes, leveraging Hive’s industry-leading AI-generated audio detection model to verify voice recognition workflows and prevent sophisticated financial fraud.
Digital marketplaces use our models to automate the detection and moderation of fraudulent listings. Moreover, marketplaces protect their customers’ experience by verifying that both users and their product reviews are authentic.
Video conferencing and live streaming platforms integrate our AI detection models to authenticate video and audio in real time, preventing both impersonation and the misuse of likenesses.
While not all-encompassing, these are select ways that customers use our models today.
Managing the Risks of Generative AI
The increasing accessibility of generative AI tools poses a newfound set of risks to companies and organizations. Moderating the proliferation of AI-generated content in a scalable, automated, and secure way is difficult. We are proud to provide a solution that supports our customers in managing these risks, now made more accessible for enterprises to deploy on-premises or in private clouds with NVIDIA NIM.
If you’re interested in accessing Hive’s AI models through NVIDIA NIM, you can learn more on our website here or on NVIDIA’s website here. If you have any questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive’s commitment to providing customers with API-accessible solutions for challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti from the University of Chicago, which sought to determine who was more effective at classifying images as AI-generated: humans or automated detectors.
Ha and Passananti’s study addresses a growing problem within the generative AI space: as generative AI models become more advanced, the boundary between human-created art and AI-generated images has become increasingly blurred. With such powerful tools accessible to the general public, various legal and ethical concerns have been raised regarding the misuse of this technology.
Such concerns are pertinent to address because the misuse of generative AI models negatively impacts both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes, such as spreading misinformation, committing fraud, or scamming individuals and organizations. As only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown evidence that AI models will deteriorate if their training data solely consists of AI-generated content—which is where Hive’s classifier comes in handy.
The study’s results show that Hive’s model outperforms both its automated peers and highly-trained human experts in differentiating between human-created art versus AI-generated images across most scenarios. This post examines the study’s methodologies and findings, in addition to highlighting our model’s consistent performance across various inputs.
Structuring the Study
In the experiment, researchers evaluated the performance of five automated detectors (three of which are commercially available, including Hive’s model) and humans against a dataset containing both human-created and AI-generated images across various art styles. Humans were categorized into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience in identifying AI-generated images.
The dataset consists of four different image groups: human-created art, AI-generated images, “hybrid images” which combine generative AI and human effort, and perturbed versions of human-created art. A perturbation is defined as a minor change to the model input aimed at detecting vulnerabilities in the model’s structure. Four perturbation methods are used in the study: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which performs perturbations at the pixel level), and Glaze (a tool used to protect human artists from mimicry by introducing imperceptible perturbations on the artwork).
After evaluating the model on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.
Evaluation Methods and Findings
The researchers evaluated the automated detectors on four metrics: overall accuracy (ratio of correctly classified images to the entire dataset), false positive rate (ratio of human-created art misclassified as AI-generated), false negative rate (ratio of AI-generated images misclassified as human-created), and AI detection success rate (ratio of AI-generated images correctly classified as AI-generated to the total number of AI-generated images).
Among automated detectors, Hive’s model emerged as the “clear winner” (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, but it also has a 0% false positive rate (i.e., it never misclassifies human art) and a low 3.17% false negative rate (i.e., it rarely misclassifies AI-generated images). According to the authors, this could be attributed to Hive’s rich collection of generative AI datasets, with high quantities of diverse training data compared to its competitors.
Additionally, Hive’s model proved to be resistant against most perturbation methods, but faced some challenges classifying AI-generated images processed with Glaze. However, it should be noted that Glaze’s primary purpose is as a protection tool for human artwork. Glazing AI-generated images is a non-traditional use case with minimal training data available as a result. Thus, Hive’s model’s performance with Glazed AI-generated images has little bearing on its overall quality.
Final Thoughts Moving Forward
When it comes to automated detectors and humans alike, Hive’s model is unparalleled. Even compared to human expert artists, Hive’s model classifies images with higher levels of confidence and accuracy.
While the study considers the model’s potential areas for improvement, it is important to note that the study was published in February 2024. In the months since, Hive’s model has improved substantially and continues to expand its capabilities, with 12+ model architectures added.
If you’d like to learn more about Hive’s AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. But don’t just trust us; test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.
Three complementary APIs to understand and protect proprietary content
We are excited to launch a new product suite that is purpose-built to empower our customers to protect their own IP or proactively monitor digital platforms for the potential misuse of others’ IP.
Hive’s Intellectual Property and Publicity Detection suite consists of three complementary cloud-based APIs:
Media Search API: identifies when copies and variants of content from thousands of movies and TV shows are being used.
Likeness Detection API: identifies the “likeness” of the most popular characters or artworks in images across a wide breadth of IP domains, based on their defining characteristics.
Celebrity Recognition API: detects the presence of well-known figures in images. It’s powered by our face detection and face similarity models and a curated and constantly updated Custom Search Index.
All three of these APIs boast comprehensive indexes that are proactively updated. Each API is seamless to integrate and can be built into any application with just a few lines of code. Importantly, with Hive, our customers can achieve speed at scale, as we serve real-time responses to billions of API calls each month.
Media Search API
Hive’s Media Search API automates human-like visual analysis to catch reposts of movies and TV shows. It is a powerful tool both for digital platforms that want to avoid hosting copyright-protected media and for content providers and streaming sites looking to be alerted to unauthorized reposts of their proprietary content.
Our Media Search API detects not only exact duplicates but also modified versions, leveraging our Image Similarity Model. This includes manual image manipulations like rotations and text overlays, as well as more subtle augmentations such as the introduction of noise, filters, and other pixel-level changes.
Additionally, for each query, the Media Search API response includes valuable metadata such as IMDB ID, content type (movie or TV show), title, relevant timestamps, and season and episode numbers (if applicable). This metadata gives our customers the full context surrounding this API’s match results.
Finally, Hive’s Media Search API brings to bear a comprehensive search index that is regularly and proactively updated, so our matches are always up-to-date. You can learn more about Hive’s Media Search API on our documentation page.
Likeness Detection API
To complement our Media Search API, we are launching our Likeness Detection API, which identifies a comprehensive set of characters and artworks across the most well-known intellectual property domains.
Hive’s Likeness Detection API is trained on thousands of images per character or artwork across a wide breadth of domains in which that particular subject may have appeared. As a result, our Likeness Detection API is able to identify the “likeness” of well-known characters in any context, based on their defining characteristics. For example, our Likeness Detection API understands that blue costume + red cape + “S” emblem represents the likeness of a certain Kryptonian superhero, whether that subject appears in a live action film, cartoon, Halloween costume, or AI-generated image.
Like our Media Search API, our Likeness Detection API is a powerful tool for digital platforms to proactively avoid hosting copyright-protected content, as well as for content creators and streaming platforms to monitor for the misuse of their proprietary content.
Additionally, Hive’s Likeness Detection API empowers Generative AI platforms to proactively filter and remove potentially copyright-protected characters buried in their datasets before training text-to-image models. Of course, the Likeness Detection API is also capable of detecting the likeness of characters within AI-generated images themselves, which may be highly stylized.
Finally, beyond monitoring for the potential misuse of proprietary content, digital platforms can leverage our Likeness Detection API to more deeply understand the content that their users are engaging with. Understanding the popular IP that users are posting and sharing is a valuable tool for contextual ad-targeting and improving content recommendation systems. Visit our documentation page to learn more about Hive’s Likeness Detection API.
Celebrity Recognition API
Rounding out Hive’s IP and Publicity Detection suite is our Celebrity Recognition API, which enables our customers to identify thousands of celebrities, politicians, athletes, and other well-known public figures in images and videos.
Hive’s Celebrity Recognition API automates human-like perceptual comparisons to identify any public figures visible in an image or video. Our Celebrity Recognition API is powered by our face detection and face similarity models and a curated and constantly updated Custom Search Index. Given an input image, Hive detects all faces present and returns a bounding box and a match for each, as well as a confidence score. When the face does not belong to a celebrity, the string returned is “No Match” and no confidence score is returned.
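As a sketch of how a result might be consumed (the field names and sample values here are illustrative assumptions, not the exact response schema):

```python
# Illustrative consumption of a Celebrity Recognition result.
def known_faces(detected_faces: list) -> list:
    names = []
    for face in detected_faces:
        match = face.get("match", "No Match")
        if match == "No Match":
            continue  # a face was detected, but it is not a known public figure
        names.append((match, face.get("confidence"), face.get("bounding_box")))
    return names

faces = [
    {"match": "No Match", "bounding_box": [10, 14, 120, 130]},
    {"match": "Jane Celebrity", "confidence": 0.98, "bounding_box": [200, 40, 330, 180]},
]
print(known_faces(faces))  # -> [('Jane Celebrity', 0.98, [200, 40, 330, 180])]
```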
Paired with Hive’s AI-Generated Content Classification APIs, social platforms can use our Celebrity Recognition API to prevent the proliferation of political or personal misinformation by filtering content for specific well-known figures, as well as by screening for deepfakes or AI-generated content.
Additionally, digital platforms can use our Celebrity Recognition API to easily sort and tag large media libraries by automatically detecting which celebrities are present. Similarly, streaming platforms and online media databases can quickly identify which actors appear in any frame of films, TV shows, interviews, and more in order to highlight specific actor details to enrich their user experiences.
Finally, Hive’s Celebrity Recognition API can equip celebrities themselves, or the agencies who represent them, to monitor digital platforms for potential misuse of their likeness, enabling proactive brand protection for well-known public figures. To learn more, check out our documentation page for Celebrity Recognition API.
How You Can Use IP and Publicity Detection Products Today
With this launch, Hive is bringing to market a comprehensive suite of AI models for understanding and protecting content. But don’t just trust us; test us: reach out to sales@thehive.ai and our team can share API keys and credentials for your new endpoints.
Hive's Innovative Integration with Thorn's Safer Match
We are excited to announce that Hive’s partnership with Thorn is now live! Our current and prospective customers can now easily integrate Thorn’s Safer Match, a CSAM (child sexual abuse material) detection solution, using Hive’s APIs.
The Danger of CSAM
The threat of CSAM involves the production, distribution, and possession of explicit images and videos depicting minors. Every platform with an upload button or messaging capabilities is at risk of hosting child sexual abuse material (CSAM). In fact, in 2023 alone, over 104 million files of potential CSAM were reported to the National Center for Missing & Exploited Children.
The current state-of-the-art approach is to use a hashing function to “hash” the content and then “match” it against a database aggregating 57+ million verified CSAM hashes. If the content hash matches an entry in the database, the content can be flagged as CSAM.
How the Integration Works
When presented with visual content, we first hash it, then match it against known instances of CSAM.
Hashing: We take the submitted image or video, and convert it into one or more hashes.
Deletion: We then immediately delete the submitted content, ensuring nothing stays on Hive’s servers.
Matching: We match the hashes against the CSAM database and return the match results to you.
Hive’s partnership with Thorn allows our customers to easily incorporate Thorn’s Safer Match into their detection toolset. Safer Match provides programmatic identification of known CSAM with cryptographic and perceptual hash matching for images and for videos, through proprietary scene-sensitive video hashing (SSVH).
How You Can Use This API Today
First, talk to your Hive sales rep, and get an API key and credentials for your new endpoint.
Image
For an image, simply send the image to us, and we will hash it using MD5 and Safer hashing algorithms. Once the image is hashed, we return the results in our output JSON.
Video
You can also send videos into the API. We use MD5 hashes and Safer’s proprietary perceptual hashing for videos as well, but the two serve different purposes. MD5 will return exact-match videos and will only indicate whether the whole video is a known CSAM video.
Additionally, Safer will hash individual scenes within the video and flag those known to be violating. Matched scenes are demarcated by start and end timestamps, as shown in the response below.
Note: For the Safer SSVH, videos are sampled at 1 FPS.
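To make the shape of a video result concrete, here is a hedged sketch; the key names are illustrative, not the exact Safer response schema:

```python
# Hypothetical video match result; key names are illustrative only.
result = {
    "md5_match": False,     # exact whole-video match against known CSAM
    "scene_matches": [      # Safer SSVH scene-level matches
        {"start_time": 12.0, "end_time": 19.0},
    ],
}

if result["md5_match"]:
    print("entire video matches known CSAM")
for scene in result["scene_matches"]:
    print(f"flagged scene from {scene['start_time']}s to {scene['end_time']}s")
```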
For more information, you can reference our documentation.
Teaming Up For a Safer Internet
CSAM is one of the most pervasive and harmful issues on the internet today. Legal requirements make this problem even harder to tackle, and previous technical solutions required significant integration efforts. But, together with Thorn’s proactive technology, we can respond to this challenge and help make the internet a safer place for everyone.
Hive’s AutoML platform gives anyone the opportunity to create best-in-class machine learning solutions for the particular issues they face. Our platform can create classification models and large language models for an endless range of use cases. If you need a model that bears no resemblance whatsoever to any pre-trained model we offer, no problem! We’ll help you build one yourself.
Hive AutoML uses the same technology behind our industry-leading ML tools to create yours. This way you get the best of both worlds — Hive’s impeccable model performance and a tool custom-built to address your needs.
Hive AutoML for Content Moderation
Today we’ll be focusing on one particular application of our AutoML platform: customizing our moderation models. These models kickstarted our success as a company and are used by many of the largest online platforms in the world. But the moderation guidelines of many sites differ from each other, and sometimes our base moderation models don’t quite fit them.
With AutoML, you can create your own version of our moderation models by fine-tuning our pre-existing heads or adding new heads entirely. We will then train a version of our high-performing base model with your added data to create a tool that best suits your platform’s moderation process.
In this blog post, we’ll walk through both how to add more data to an existing Hive moderation head and how to add a new custom moderation head. We’ll demonstrate the former while building a visual moderation model and the latter on a text moderation model. Audio moderation is not currently supported on AutoML.
Building a Visual Moderation Model
Hive AutoML for Visual Moderation allows you to customize our Visual Moderation base model to fit your specific needs. Using your own data, you can add new model heads or fine-tune any of the existing 45+ subclasses that we provide as part of our Visual Moderation tool. A full list of these classes is available here.
For this walkthrough, we’ll be fine-tuning the tobacco head. Our data will thus include images and labels for this head only. The resulting model will include all Hive visual moderation heads, with the tobacco head re-trained to incorporate this new data.
Uploading Your Dataset
Before you start building your model, you first need to upload any datasets you’ll use to the Dataset section of our AutoML platform. For Visual Moderation model training, we require a CSV file with a column for your image data (as publicly accessible image URLs) and an additional column for each head you wish to train.
For this tutorial, we’re going to train using additional data for the tobacco class. The below CSV includes image URLs and a column of labels for that head.
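For illustration, such a file might look like the sketch below. The URLs are placeholders, and while “no_tobacco” appears later in this walkthrough, the positive label shown here is assumed to follow the same naming pattern:

```csv
image_url,tobacco
https://example.com/images/0001.jpg,yes_tobacco
https://example.com/images/0002.jpg,no_tobacco
https://example.com/images/0003.jpg,yes_tobacco
```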
After you’ve selected your dataset file, you’ll be asked to confirm the column mapping. Make sure the columns of your dataset have been interpreted correctly and that you have the correct format (image or text) selected for each column.
Once you’ve confirmed your mapping, you can preview and edit your data. This page opens automatically after any dataset upload. You will be able to check whether all images were uploaded successfully, view the images themselves, and change their respective labels if desired. You can also add or delete any data that you wish to before you proceed onto model training.
Creating a Dataset Snapshot
When you’re happy with your dataset, you’ll then need to create a snapshot from it. A snapshot is a point-in-time export of a dataset that validates that dataset for training. Once a snapshot is created, its contents cannot be changed. This means that while you can continue to edit your original dataset, your snapshot will not change along with it — if you make any changes, you’ll need to create a new snapshot after you’re finished with your changes.
You can create a snapshot from any live dataset. To do so, simply click the “Create Snapshot” button on that dataset’s detail page. You’ll be prompted to provide some information, most notably which columns to use for image input and data labels. After your snapshot is successfully created, you’re ready to start training!
Creating a New Model
To create a training, you can select the “Create Model” button on the snapshot detail page. You’ll once again be asked to provide several pieces of information, including your model’s name, description, base model, and datasets. Make sure to select “Hive Vision Moderation” under the “Base Model” category as opposed to a general image classification model.
You can choose to upload a separate test dataset or split off a random section of your training dataset to use instead. If you choose to upload a separate test dataset, this dataset must contain the same heads and classes as your training dataset. After uploading your dataset, you will also need to create a snapshot of that dataset before you begin model training.
If you choose to split off a section of your training dataset, you will be able to choose the percentage of that dataset that you would like to use for testing as you create your training.
Before you begin your training, you are also able to edit some training preferences such as maximum number of training epochs, model selection rule, model selection label, early stopping, and invalid data criteria. If you’re unsure what any of these options are, there is a little information icon next to each that will explain what is meant by that setting.
After uploading your training (and, if desired, test) dataset and selecting your desired training options, you’re ready to create your model. After you begin training, your model will be ready within 20 minutes. You will automatically be directed to the model’s detail page, where you can watch its progress as it trains.
Playground and Metrics: Evaluating Your Model
When your model has completed its training, the model’s detail page will display a variety of metrics in order to help you analyze your model’s performance. At the top of the page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You can toggle whether these metrics are calculated by head overall or by each class within a head.
Below these numbers, you’ll also be able to view an interactive precision/recall (PR) curve. This is the gold-standard metric for a classification model and gives you more insight into how your model balances the inherent tradeoff between high precision and high recall.
You’ll then be shown a confusion matrix, which is an exact breakdown of the true positives, false positives, true negatives, and false negatives of the model’s results. This can highlight particular weak spots of your model and potential areas you may want to address with further training. As shown below, our example model has no false positives but several false negatives — images with tobacco that were classified as “no_tobacco.”
The final section of our metrics page is an area called the “playground.” The playground allows you to test your newly created AutoML model by submitting sample queries and viewing the responses. This feature is another great way to explore the way that your model responds to different kinds of prompts and the areas in which it could improve. You are given 500 free sample queries — beyond that you will be prompted to deploy your model with the cost of each submission charged to your organization’s billing account.
To test our tobacco model, we submitted the following sample image. To the right of it, you can see the results for each Hive visual moderation class, including tobacco, which is classified correctly with a perfect confidence score of 1.00.
Deploying Your Model
To begin using your model, you can create a deployment from it. This will open the project on Hive Data, where you will be able to upload tasks, view tasks, and access your API key as you would with any other Hive Data project. An AutoML project can have multiple active deployments at one time.
Building a Text Moderation Model
Just like for Visual Moderation, our AutoML platform allows you to customize our Text Moderation base model to fit your particular use cases by adding or re-training model categories. The full class definitions for all 13 of our currently offered heads are available here. For this section of the walkthrough, we will be creating a new custom head in order to add capabilities to our model that we don’t currently offer: sentiment analysis.
Sentiment analysis is the task of categorizing the emotional tone of a piece of text, typically into two labels: positive or negative. Occasionally there may be a sentiment analysis task that breaks the sentiment down into more specific categories, such as joyful, angry, etc. Adding this kind of information to our existing Hive Text Moderation model could prove useful for platforms that wish to either exclude negative content on sites for children or to put limits on certain comment sections or forums where negative commentary is unwanted.
Sentiment analysis is a complex problem, since it is a language-based task. Understanding the meaning and tone of a sentence is not always easy even for humans. To keep it simple, we’ll just be using the two possible classifications of positive and negative.
Uploading Your Dataset
As with creating a Visual Moderation model, you’ll need to upload your data as a CSV file to the “Data” section of our AutoML platform prior to model training. The format of our sentiment analysis dataset is shown below, though the column names do not need to be anything specific in order to be processed correctly.
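A few illustrative rows, invented for this walkthrough, might look like:

```csv
text_data,sentiment
"I loved every minute of this stream!",positive
"This forum is a complete waste of time.",negative
```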
After uploading your dataset, you’ll be asked to confirm the format of each column as text, images, or JSON. If you’d like to disregard a column entirely, there is also an “Ignore Column” option. After you hit confirm, you can preview and edit your dataset just as you could with your image dataset in the Visual Moderation example. The preview page for text datasets is shown below.
Creating a Dataset Snapshot
As described in the Visual Moderation walkthrough, you’ll need to create a snapshot of your dataset in order to validate it prior to model training. When making your snapshot, make sure that you select “Text Classification” as your “Snapshot Type.” This will ensure that your snapshot is sufficient to train a Text Moderation model. You will also need to specify which column contains your text input and which contains the labels for that text input, as shown below for our dataset.
In the example above, we’ve selected our “text_data” column as our input and our “sentiment” column as our training labels.
Creating a New Model
After you’ve created your snapshot, you’ll automatically be brought to that snapshot’s detail page. From this page, starting a new model training is easy: just hit the big “Create New Model” button on the top right. You’ll be asked to name your model and provide a few key details about the training, such as which snapshots you’d like to use as your data and how many times a training will cycle through that data.
Make sure you’ve selected “Text Classification” as your model type and “Hive Text Moderation” as your base model. Then you’re ready to start your training! Model training takes up to 20 minutes depending on several factors including the size of your dataset. Most take only several minutes to complete.
Metrics and Model Evaluation
Once your training has completed, you’ll be redirected to the details page for your new moderation model. On this page, you’ll be shown the model’s precision, recall, balanced accuracy, and F1 score. You will also be able to view a precision/recall (P/R) curve and confusion matrix in order to further analyze the performance of your model.
The overall performance of the model is pretty good for a difficult task such as sentiment analysis. While there is room for improvement, this first round of training indicates that with some additional data we could likely bring all metrics above 90%. The confusion matrix for this model shows that a specific area of weakness is false negatives; a possible solution would be to increase the number of positive examples in the training data and observe whether this improves model results.
We do not currently offer the playground feature for text moderation models, though we are working on this and expect it to be released in the coming months.
Deploying Your Model
The process for deploying your model is identical to the way we deployed our Visual Moderation model in the first example. To deploy any model, simply click “Create Deployment” from that model’s details page. Once deployed, you can access your unique API keys and begin to submit tasks to the model like any other Hive model.
Final Thoughts
We hope this in-depth walkthrough was helpful. If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.
Hive’s AutoML platform allows you to quickly train, evaluate, and deploy machine learning models for your own custom use cases. The process is simple — just select your desired model type, upload your datasets, and you’re ready to begin training!
Since we announced the initial release of our AutoML platform, we’ve added support for Large Language Model training. Now you can build everything from classification models to chatbots, all in the same intuitive platform. To illustrate how easy the model-building process is, we’ll walk through it step-by-step with each type of model. We’ll also provide a link to the publicly available dataset we used as an example so that you can follow along.
Training an Image Classification Model
First we’re going to create an Image Classification model. This type of model is used to identify certain subjects, settings, and other visual attributes in both images and videos. For this example, we’ll be using a snacks dataset to identify 20 different kinds of food (strawberries, apples, hot dogs, cupcakes, etc.). To follow along with this walkthrough, first download the images from this dataset, which are sorted into separate files for each label.
Formatting the Datasets
After downloading the image data, we’ll need to put this data in the correct format for our AutoML training. For Image Classification datasets, the platform requires a CSV file that contains one column for image URLs titled “image_url” and up to 20 other columns for the classification categories you wish to use. This requires creating publicly accessible links for each image in the dataset. For this example, all 20 of our food categories will be part of the same head — food type. To do this, we formatted our CSV as follows:
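Here is a hedged sketch of that layout; the URLs are placeholders and the label column name is simply our choice for this walkthrough:

```csv
image_url,food_type
https://example.com/snacks/strawberry_001.jpg,strawberry
https://example.com/snacks/hot_dog_014.jpg,hot dog
https://example.com/snacks/cupcake_007.jpg,cupcake
```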
This particular dataset is within the size limitations for Image Classification datasets. When uploading your own dataset, it is crucial that you ensure it meets all of the sizing requirements and other specifications or the dataset upload will fail. These requirements can be found in our AutoML documentation.
Both test and validation datasets are provided as part of the snacks dataset. When using your own datasets, you can choose to upload a test dataset or to split off a random section of your training data to use instead. If you choose the latter, you will be able to select what percentage of that data you want to use as test data as you create your training.
Uploading the Datasets
Before we start building the model, we first need to upload both our training and test datasets to the “Datasets” section of our AutoML platform. This part of our platform validates each dataset before it can be used for training as well as stores all datasets to be easily accessed for future models. We’ll upload both the training and test datasets separately, naming them Snacks (Train) and Snacks (Test) respectively.
Creating a Training
To start building the model, we’ll head to our AutoML platform and select the “Create New Model” button. We’ll then be brought to a project setup page where we will be prompted to enter a project name and description. For Model Type, we’ll select “Image Classification.” On the right side of the screen, we can add our datasets by selecting from our dataset library. We’ll select the datasets called Snacks (Train) and Snacks (Test) that we just uploaded.
And just like that, we’re ready to start training our model! To begin the training process, we’ll click the “Start Training Model” button. The model’s status will then shift to “Queued” and then “In Progress” while we train the model. This will likely take several minutes. When training is complete, the status will display as “Completed.”
Evaluating the Model
After model training is complete, the page for that project will show various performance metrics so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.
Below that, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that two images of cupcakes were incorrectly classified as cookies — an understandable mistake as the two are both decorated desserts.
These detailed metrics can help us know what categories to target if we want to train a better version of the model. If you would like to retrain your model, you can also click the “Update Model” button to begin the training process again.
Deploying the Model
Even after the first time training this model, we’re pretty happy with how it turned out. We’re ready to deploy the model and start using it. To deploy, select the project and click the “Create Deployment” button in the top right corner. The project’s status will shift to “Deploying.” The deployment may take a few minutes.
Submitting Tasks via API
After the deployment is complete, we’re ready to start submitting tasks via API as we would any pre-trained Hive model. We can click on the name of any individual deployment to open the project on Hive Data, where we can upload tasks, view tasks, and access our API key. There is also a button to “Undeploy” the project, if we wish to deactivate it at any point. Undeploying a model is not permanent — we can redeploy the project if we later choose to.
To see a video of the entire training and deployment process for an Image Classification model, head over to our YouTube channel.
Training a Text Classification Model
We’ll now walk through that same training process in order to build a Text Classification model, but with a few small differences. Text classification models can be used to sort and tag text content by topic, tone, and more. For this example, we’ll use the Twitter Sentiment Analysis dataset posted by user carblacac on Hugging Face. This dataset consists of a series of short text posts originally published to Twitter and whether they have a negative (0) or positive (1) overall sentiment. To follow along with this walkthrough, you can download the dataset here.
Formatting the Datasets
For Text Classification datasets, our AutoML platform requires a CSV with the text data in a column titled “text_data” and up to 20 other columns that each represent classification categories, also called model heads. Using the Twitter Sentiment Analysis dataset, we only need to rename the columns like so:
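After renaming, the file might look like this (rows invented for illustration, with “sentiment” as our chosen label column name):

```csv
text_data,sentiment
"just got back from the beach with friends, best day ever",1
"my flight got cancelled again and nobody will help me",0
```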
The data consists of two sets: a training set with 150k examples and a test set with 62k examples. Before we upload our dataset, however, we must ensure that it fits our Text Classification dataset requirements. The training set does not: our AutoML platform only accepts CSV files with 100,000 rows or fewer, and this one has 150,000. To use this dataset, we’ll have to remove some examples. To keep the number of examples for each class relatively equal, we removed 25,000 negative (0) examples and 25,000 positive (1) ones, as sketched below.
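One way to do this trimming, assuming the original CSV is roughly balanced between the two classes and using pandas (the filenames are placeholders):

```python
import pandas as pd

# Placeholder filenames; assumes the renamed columns from the sketch above.
df = pd.read_csv("twitter_sentiment_train.csv")

# Keep 50,000 examples per class, trimming 150k rows down to the 100k limit.
trimmed = pd.concat([
    df[df["sentiment"] == 0].sample(n=50_000, random_state=42),
    df[df["sentiment"] == 1].sample(n=50_000, random_state=42),
])
trimmed.to_csv("twitter_sentiment_train_100k.csv", index=False)
```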
Uploading the Datasets
After fixing the size issue, we’re ready to upload our datasets. As is the case with all model types, we must first upload any datasets we are going to use before we create our training.
Creating a Training
After both the training and test datasets have been validated, we’re ready to start building our model. On our AutoML platform, we’ll click the “Create New Model” button and enter a project name and description. For our model type, this time we’ll select “Text Classification.” Finally, we’ll add the training and test datasets that we just uploaded.
We’re then ready to start training! This aspect of the training process is identical to the one shown above for an Image Classification model. Just click the “Start Training Model” button on the bottom right corner of the screen. When training is complete, the status will display as “Completed.”
Evaluating the Model
Just like in our Image Classification example, the project page will show various performance metrics after training is complete so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.
Below the precision, recall, and balanced accuracy, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that while there were a fair number of mistakes for each class, there were more cases in which a positive example was mistaken for a negative one than the other way around.
While the results of this training are not as good as our Image Classification example, this is somewhat expected — sentiment analysis is a more complex and difficult classification task. While this model could definitely be improved by retraining with slightly different data, we’ll demonstrate how to deploy it. To retrain your model, however, all you need to do is click the “Update Model” button and begin the training process again.
Deploying the Model
Deploying your model is the exact same process as described above in the Image Classification example. After the deployment is complete, you’ll be able to view the deployment on Hive Data and access the API keys needed in order to begin using the model.
To see a video of the entire training and deployment process for a Text Classification model, head over to our YouTube channel.
Training a Large Language Model
Finally, we’ll walk through the training process for a Large Language Model (LLM). This process is slightly different from the training process for our classification model types, both in terms of dataset formatting and model evaluation. Our AutoML platform supports two different types of LLMs: Text and Chat. Text models are geared towards generating passages of writing or lines of code, whereas chat models are built for interactions with the user, often in the format of asking questions and receiving concise, factual answers. For this example, we’ll be using the Viggo dataset uploaded by GEM to Hugging Face. To follow along with us as we build the model, you can download the training and test sets here.
Formatting the Datasets
This dataset supports the task of summarizing and restructuring text into a very specific syntax. All data is within the video game domain, and all prompts take the form of either questions or statements about various games. The goal of the model is to take these prompts, extract the main idea behind them, and reformat them. For example, the prompt “Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good” becomes “give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor]).”
First, we’ll check to make sure this dataset is valid per our guidelines for AutoML datasets. The size is well under the limit of 50,000 rows, with only around 5,000. To format it correctly, we just need to ensure that the prompt is in a column titled “prompt” and the expected completion is in another column titled “completion.” All other columns can be removed. From this dataset, we will use the column “target” as “prompt” and the column “meaning_representation” as “completion.” The final CSV is as shown below:
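Using the example prompt from above, the first data row would look like:

```csv
prompt,completion
"Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it's just not good","give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor])"
```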
Uploading the Datasets
Now let’s upload our datasets. We’ll be using both the training and test datasets from the Viggo dataset as provided here. After both datasets have been validated, we’re ready to train the model.
Creating a Training
We’ll head back to our Models page and select “Create New Model.” This time, the project type should be “Language Generative – Text.” We will then choose our training and test datasets from the list of those we’ve already uploaded to the platform. Then we’ll start the training!
Evaluating the Model
For Large Language Models, the metrics page looks a little different than it does for our classification models.
The loss measures how closely the model’s response matches the reference response from the test data, where 0 represents a perfect prediction and a higher loss signifies that the prediction is increasingly far from the actual response sequence. If the reference response has 10 tokens, we let the model predict each of the 10 tokens, given that all previous tokens match the reference, and display the final numerical loss value. A toy sketch of this computation follows.
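In other words, the displayed value can be thought of as the average negative log-likelihood of the reference tokens under teacher forcing. A toy Python sketch with made-up per-token probabilities:

```python
import math

# Made-up probabilities the model assigns to each correct next token,
# given that all preceding reference tokens were supplied (teacher forcing).
token_probs = [0.91, 0.85, 0.97, 0.64]

loss = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(f"loss = {loss:.3f}")  # 0.0 only if every token is predicted with certainty
```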
You can also evaluate your model by interacting with it in what we call the playground. Here you can submit prompts directly to your model and view its response, allowing model evaluation through experimentation. This will be available for 15 days after model training is complete, and has a limit of 500 requests. If either the time or request limit is reached, you can instead choose to deploy the model and continue to use the playground without limits, with each request charged to the organization’s billing account.
For our Viggo model, all metrics are looking pretty good. We entered a few prompts into the playground to further test it, and the results showed no issues.
Deploying the Model
The process to deploy a Large Language Model is the same as it is for our classification models. Just click “Create Deployment” and you’ll be ready to submit API requests in just a few short minutes.
To see a video of the entire training and deployment process for an LLM, head over to our YouTube channel.
Final Thoughts
We hope this in-depth walkthrough of how to build different types of machine learning models with our AutoML platform was helpful. Keep an eye out for more AutoML tutorials in the coming weeks, such as a detailed guide to Retrieval Augmented Generation (RAG), data stream management systems (DSMS), and other exciting features we support.
If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.
Dataset Sources
All datasets linked as examples in this post are publicly available for a wide range of uses, including commercial use. The snacks dataset and Viggo dataset are both licensed under a Creative Commons Attribution Share-Alike 4.0 (CC BY-SA 4.0) license. They can be found on Hugging Face here and here. The Twitter Sentiment Analysis dataset is licensed under the Apache License, Version 2.0. It is available on Hugging Face here. None of these datasets may be used except in compliance with their respective license agreements.