Moderate - Trust & Safety
Visual Moderation
Text Moderation
Audio Moderation
CSAM Detection
Demographic Attributes
Detect Objects & Scenes
OCR
Contextual Scene Classification
Detect AI Content
AI-Generated Content Classification
Detect People & Identity
Likeness Detection
Logo & Logo Location
Celebrity Recognition
Generate
Image Generation
Video Generation
Text Generation
Multimodal Language Model
Translate
Speech-to-Text
Translation
Search
Custom Search
Reverse Image Search
Media Search
Contextual Search
NFT Search
Platform
Custom Training - AutoML
Moderation Review Tool
NVIDIA NIM
Technology & Digital Platforms
For Online Communities
For Streaming Platforms
For Marketplaces
For Generative AI Apps
For Gaming Companies
For Dating Apps
Sports, Media, & Marketing
For Brands
For Publishers
For Agencies
For Teams and Leagues
Risk & Identity Management
For Insurance Companies
For Financial Services
Use Cases
Content Moderation
Sponsorship Intelligence
Ad Intelligence
Context-Based Ad Targeting
Careers
About Us
Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act — a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors.
Blackburn, Coons, Salazar, Dean, Colleagues Introduce “NO FAKES Act” to Protect Individuals and Creators from Digital Replicas
Hive has integrated NCMEC’s CyberTipline into Moderation Dashboard, streamlining the process of submitting child sexual abuse material (CSAM) reports.
Next-day insights on the latest trends in marketing and culture, powered by Hive’s AI models.
We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025, Europe’s premier conference for T&S leaders.
Hive is proud to announce that we are partnering with the Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online.
This $2 Billion Content Moderation Company Is Trying To Stop AI Images Of Child Sexual Abuse