Hive has integrated NCMEC’s CyberTipline into its Moderation Dashboard, streamlining the process of submitting child sexual abuse material (CSAM) reports.
Next-day insights on the latest trends in marketing and culture, powered by Hive’s AI models.
We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025, Europe’s premier conference for T&S leaders.
Hive is proud to announce that we are partnering with the Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online.
This $2 Billion Content Moderation Company Is Trying To Stop AI Images Of Child Sexual Abuse
We at Hive are excited to share a new report on the state of the deepfake, covering its evolving trends, emerging threats, and more.
Hive has released its Moderation 11B Vision Language Model, which offers a powerful way to handle flexible, context-dependent moderation scenarios.