Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API, which can help trust and safety teams proactively combat text-based child sexual exploitation at scale.
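For teams evaluating how a text classification endpoint like this might slot into an existing moderation pipeline, here is a minimal sketch of submitting a message for screening over a REST API. The endpoint URL, header format, and field names are illustrative assumptions for demonstration only, not documented parameters of the CSE Text Classifier API.

```python
import requests

# Illustrative sketch only: the endpoint URL, auth header, and payload field
# below are assumptions, not documented Hive API parameters.
API_URL = "https://api.thehive.ai/api/v2/task/sync"  # hypothetical sync endpoint
API_KEY = "YOUR_API_KEY"  # placeholder project key

def classify_text(text: str) -> dict:
    """Submit one text string for classification and return the raw JSON response."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Token {API_KEY}"},
        data={"text_data": text},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = classify_text("example user message to screen")
    print(result)
```

In practice, a trust and safety team would route messages flagged by such a classifier into a human review queue rather than acting on model output alone.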
Trump ‘TRUTH Social’ developing content moderation practices to ensure ‘family-friendly’ community
Donald Trump’s ‘Free Speech’ Site Will Use Big-Tech Artificial Intelligence to Censor Posts
Social app Parler is cracking down on hate speech — but only on iPhones
A ‘PG’ Version of Parler Returns to the iTunes App Store
Earlier today, The Washington Post published a feature detailing Hive’s work with social network Parler and the role our content moderation solutions have played in protecting its community from harmful content, work that helped earn the app’s reinstatement in Apple’s App Store.
Hive’s CEO, Kevin Guo, was featured on TODAY discussing the role of AI in content moderation.