Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API, which can help trust and safety teams proactively combat text-based child sexual exploitation at scale.
Hive is proud to announce that we are partnering with Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online.
This $2 Billion Content Moderation Company Is Trying To Stop AI Images Of Child Sexual Abuse
We at Hive are excited to share a new report on the state of deepfakes, covering evolving trends, emerging threats, and more.
Hive has released Moderation 11B Vision Language Model, which offers a powerful way to handle flexible and context-dependent moderation scenarios.
Hive announces that we have been awarded a landmark Department of Defense (DoD) contract for deepfake content detection.
The US Department of Defense is investing in deepfake detection