Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API, which can help trust and safety teams proactively combat text-based child sexual exploitation at scale.
Follow along as we provide a step-by-step guide to building a machine learning model with Hive AutoML.
Hive provides an in-depth comparison of all of its products against top competitors.
How Easy Is It to Fool A.I.-Detection Tools?
Hive presented at CVPR last week, highlighting a few important considerations when building ML models for classification tasks.
Hive announces a new AutoML tool to train, evaluate, and deploy customized machine learning models.
AI-Created Images Are So Good Even AI Has Trouble Spotting Some