We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025.
As Europe’s premier Trust & Safety conference, this summit is designed to empower T&S leaders to tackle operational and regulatory challenges, providing them with both actionable insights and future-focused strategies. The summit will be held Tuesday, March 25th and Wednesday, March 26th at the Hilton London Syon Park, UK.
The 2-day event will explore themes such as regulatory preparedness, scaling trust and safety solutions, and best practices for effective content moderation. An incredible selection of programming will include expert-led panels, interactive workshops and networking events.
Hive’s CEO Kevin Guo will deliver the keynote presentation on “The Next Frontier of Content Moderation”, covering topics like multi-modal LLMs and detecting AI-generated content. Additionally, Hive will host two panels during the event:
Hyperscaling Trust & Safety: Navigating Growth While Maintaining Integrity. Hive will discuss best practices for scaling trust & safety systems for online platforms experiencing hypergrowth.
Harnessing AI to Detect Unknown CSAM: Innovations, Challenges, and the Path Forward. Hive will be joined by partners Thorn and IWF to discuss recent advancements in CSAM detection solutions.
As the lead sponsor of the T&S Summit 2025, we are furthering our commitment to making the internet a safer place. Today, Hive’s comprehensive moderation stack empowers Trust & Safety teams of all sizes to scale their moderation workflows with both pre-trained and customizable AI models, flexible LLM-based moderation, and a moderation dashboard for streamlined enforcement of policies.
We look forward to welcoming you to the Trust & Safety Summit 2025. If you’re interested in attending the conference, please reach out to your Hive account manager or sales@thehive.ai. Prospective conference attendees can also find more details and ticket information here. For a detailed breakdown of summit programming, download the agenda here.
To learn more about what we do at Hive, please reach out to our sales team or contact us here for further questions.
Hive is thrilled to announce that we’re releasing Moderation 11B Vision Language Model. Fine-tuned on top of Llama 3.2 11B Vision Instruct, Moderation 11B is a new vision language model (VLM) that expands our established suite of text and visual moderation models. Building on our existing capabilities, this new model offers a powerful way to handle flexible and context-dependent moderation scenarios.
An Introduction to VLMs and Moderation 11B
Vision language models (VLMs) are models that can learn from image and text inputs. This ability to simultaneously process inputs across multiple modalities (e.g. images and text) is known as multimodality. While VLMs share similar functions with large language models (LLMs), traditional LLMs cannot process image inputs.
With Moderation 11B VLM, we leverage unique multimodal capabilities to extend our existing moderation tool suite. Beyond its multimodality, Moderation 11B VLM can incorporate additional contextual information, which is not possible with our traditional classifiers. The model’s baked-in knowledge, combined with insights learned from our classifier dataset, enables a more comprehensive approach to moderation.
Moderation 11B VLM is trained on all 53 public heads of our Visual Moderation system, recognizing content across distinct categories such as sexual content, violence, drugs, hate, and more. Because of these enhancements, it becomes a valuable addition to our existing Enterprise moderation classifiers, helping to capture a wide range of flexible and alternative cases that can arise in dynamic workflows.
Potential Use Cases
Moderation 11B VLM applies to a broad range of use cases, notably surpassing Llama 3.2 11B Vision Instruct in identifying contextual violations and handling unseen data in our internal tests. Below are some potential use cases where our model performs well:
Contextual violations: Cases where no individual input would be flagged as a violation on its own, but the inputs taken together constitute one. For example, a text message could appear harmless in isolation, yet the preceding conversation context reveals it to be a violation.
Multi-modal violations: Situations where both text and image inputs are important. For instance, analyzing a product image alongside its description can uncover violations that single-modality models would miss.
Unseen data: Inputs that the model has not previously encountered. For example, customers may use Moderation 11B VLM to ensure that user content aligns with newly introduced company policies.
Below are graphical representations of how our fine-tuned Moderation 11B model performed in our internal testing compared to the Llama 3.2 11B Vision Instruct model. We assessed their respective F1 scores, a metric that combines both precision and recall. The F1 score was computed using the standard formula: F1 = 2 * (precision * recall) / (precision + recall).
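The F1 computation described above can be sketched in a few lines of Python (a generic illustration of the metric, not Hive’s evaluation code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * (precision * recall) / (precision + recall)

# Example: a model with 0.9 precision but only 0.6 recall
print(round(f1_score(0.9, 0.6), 3))  # 0.72
```

Because F1 is a harmonic mean, it penalizes imbalance: a model cannot achieve a high F1 by maximizing precision alone while recall stays low, or vice versa.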
In our internal evaluation, we tasked both our Moderation 11B VLM and Llama 3.2 11B Vision Instruct with learning the classification guidelines outlined in our public Visual Moderation documentation. These guidelines were then used to evaluate a sizable, randomly selected sample of images from our proprietary Visual Moderation dataset, which has highly accurate hand-labeled ground-truth classifications. This dataset also included diverse and challenging content types from each of our visual moderation heads, such as sexual intent, hate symbols, and self harm. While Moderation 11B VLM’s performance demonstrates its ability to generalize well within the scope of these content classes, it is important to note that results may vary depending on the composition of external datasets.
Expanding Moderation
With Moderation 11B VLM’s release, we hope to meaningfully and flexibly broaden the range of use cases our moderation tools can handle. We’re excited to see how this model assists with your moderation workflows, especially when navigating complex scenarios. Anyone with a Hive account can access our API playground here to try Moderation 11B VLM directly from the user interface.
Below are two examples of Moderation 11B VLM requests and responses.
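To give a general sense of the request/response flow, here is a minimal sketch. The payload shape, field names, and class labels below are illustrative assumptions only; consult Hive’s API documentation for the actual schema:

```python
import json

# Hypothetical multimodal request payload: an image plus its accompanying
# text, sent together so the model can judge them in context.
request_payload = {
    "model": "moderation-11b-vlm",  # illustrative model identifier
    "input": [
        {"type": "image_url", "image_url": "https://example.com/listing.jpg"},
        {"type": "text", "text": "Brand-new designer bag, 90% off, DM to buy"},
    ],
}

# A mock response in the shape such an API might return: one score per class.
mock_response = json.loads("""
{
  "output": [
    {"class": "no_violation", "score": 0.12},
    {"class": "fraud_or_scam", "score": 0.88}
  ]
}
""")

# Take the highest-scoring class as the moderation verdict.
top = max(mock_response["output"], key=lambda o: o["score"])
print(top["class"], top["score"])
```

The key point the sketch illustrates is that image and text arrive in a single request, letting the model flag violations that emerge only from the combination.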
For more details, please refer to the documentation here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.
Combating Synthetic Media and Disinformation
Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary in order to protect the integrity of vital information systems and stay vigilant against the future spread of misinformation, threats, and conflicts at a national scale.
Hive’s reputation as frontline defenders against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand, we have been and continue to be committed to delivering unmatched detection tools that can mitigate these risks with accuracy and speed.
Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.
Our Cutting-Edge Tools
To best empower the U.S. defense forces against potential threats, we have provided five proprietary models that can detect whether an input is AI-generated or a deepfake.
An input flagged as AI-generated was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the faces in the original has been swapped with another person’s face.
The models we’ve provided are as follows:
AI-Generated Detection (Image and Video), which detects if an image or video is AI-generated.
AI-Generated Detection (Audio), which detects if an audio clip is AI-generated.
Deepfake Detection (Image), which detects if an image contains one or more faces that are deepfaked.
Deepfake Detection (Video), which detects if a video contains one or more faces that are deepfaked.
Liveness (Image and Video), which detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside of the primary image).
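One way these detectors could be combined in practice is a simple triage pass over their scores. The function below is a hypothetical sketch of such logic; the score names and the 0.5 threshold are assumptions, and a real deployment would tune thresholds against labeled data:

```python
def triage(scores: dict, threshold: float = 0.5) -> str:
    """Map per-detector scores (0-1) to a coarse verdict."""
    if scores.get("ai_generated", 0.0) >= threshold:
        return "ai_generated"        # synthesized by a generative engine
    if scores.get("deepfake", 0.0) >= threshold:
        return "deepfake"            # real media with a swapped face
    if scores.get("liveness_secondary", 0.0) >= threshold:
        return "secondary_face"      # face appears on a screen, photo, or painting
    return "likely_authentic"

print(triage({"ai_generated": 0.02, "deepfake": 0.91, "liveness_secondary": 0.10}))
# deepfake
```

Ordering the checks this way reflects the distinction drawn above: fully synthesized content is ruled out first, then face swaps in otherwise real media, then faces that are merely re-captured rather than live.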
Forging a Path Forward
Even as new threats continue to emerge and escalate, Hive continues to be steadfast in our commitment to provide the world’s most capable AI models for validating the safety and authenticity of digital content.
For more details, you can find our recent press release here and the DIU’s press release here. If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
Hive is excited to announce the groundbreaking integration of our proprietary AI models with NVIDIA NIM. For the first time, this collaboration will allow Hive customers to deploy our industry-leading AI models in private clouds and on-premises data centers. We are also announcing that for the remainder of 2024, internet social platforms can receive up to 90 days of free trial access to our models. To learn more, check out the press release here.
The first Hive models to be made available with NVIDIA NIM are our AI-generated content detection models, which allow customers to identify AI-generated images, video, and audio. However, we plan to make additional models available through NVIDIA NIM in the coming months, including content moderation, logo detection, optical character recognition, speech transcription, custom models through Hive’s AutoML platform, and more.
Secure and Accelerated Deployments with NIM
Short for NVIDIA Inference Microservices, NIM provides models as optimized containers to prospective customers. This enables organizations to run AI models on NVIDIA GPUs in private clouds, on workstations, and on-premises. NVIDIA NIM is part of the NVIDIA AI Enterprise software platform and connects the power of Hive’s proprietary AI models, securely deployed on NVIDIA’s accelerated infrastructure, with enterprise customers everywhere.
While Hive’s cloud-based APIs process billions of customer requests every month, among prospective customers’ top requests has been the ability to deploy Hive models in private clouds or on-premises. These are often enterprises whose strict data governance standards challenge the use of our cloud-based APIs. Our integration with NIM solves this challenge.
How Customers Use Our Leading AI Detection Models
Our AI-detection tools—the first Hive models to be made available with NVIDIA NIM—have been widely recognized as best-in-class, including by an independent research study from the University of Chicago. The researchers found that Hive’s model was the “clear winner” against both its automated competitors and highly-trained human experts in classifying images as either AI-generated or human-created.
With generative AI on the rise, Hive’s AI detection models have become crucial in combating the technology’s misuse. Here are select ways that customers use our models to protect themselves from the potential misuse of AI-generated and synthetic content.
Internet social platforms leverage our AI detection models to proactively screen content for the presence of AI-enabled misinformation in real time. Digital platforms can leverage our detections to provide transparency to their users by tagging content as AI-generated, or moderate potential misinformation by implementing sitewide bans.
Insurance companies use our models to automate the process of identifying AI-enabled fraud in evidence submitted with insurance claims. By scanning claims evidence for AI-generated augmentations, insurers can quickly, confidently, and securely weed out fraud, avoiding the significant cost of paying out fraudulent claims.
Banks, brokers, and other financial institutions use our AI-generated content detection models to secure their user identification verification and KYC processes, leveraging Hive’s industry-leading AI-generated audio detection model to verify voice recognition workflows and prevent sophisticated financial fraud.
Digital marketplaces use our models to automate the detection and moderation of fraudulent listings. Moreover, marketplaces protect their customers’ experience by verifying that both users and their product reviews are authentic.
Video conferencing and live streaming platforms integrate our AI detection models to authenticate video and audio in real time, preventing both impersonation and the misuse of likenesses.
While not all-encompassing, these are select ways that customers use our models today.
Managing the Risks of Generative AI
The increasing accessibility of generative AI tools poses a new set of risks to companies and organizations. Moderating the proliferation of AI-generated content in a scalable, automated, and secure way is difficult. We are proud to provide a solution that supports our customers in managing these risks, now made more accessible for enterprises to deploy on-premises or in private clouds with NVIDIA NIM.
If you’re interested in accessing Hive’s AI models through NVIDIA NIM, you can learn more on our website here or on NVIDIA’s website here. If you have any questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.