Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act, a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors.
We at Hive are excited to share a new report on the state of deepfakes, covering evolving trends, emerging threats, and beyond.
Hive has released its Moderation 11B Vision Language Model, which offers a powerful way to handle flexible and context-dependent moderation scenarios.
Hive announces that we have been awarded a landmark Department of Defense (DoD) contract for deepfake content detection.
The US Department of Defense is investing in deepfake detection
Hive Secures DoD Contract for Deepfake Detection, Pioneering AI Defense Against Emerging Threats
Defense Innovation Unit and DoD Collaborate To Strengthen Synthetic Media Detection Capabilities