Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API, which can help trust and safety teams proactively combat text-based child sexual exploitation at scale.
Seamlessly incorporate your own custom classes into our industry-leading moderation models with Hive AutoML.
How AI Fakes May Harm Your Business–and What This Founder Is Doing to Help
From Hive’s CEO, Kevin Guo, How Does The Company Appeal To New Partnerships?
Follow along as we provide a step-by-step guide to building a machine learning model with Hive AutoML.
Hive provides an in-depth comparison of all of our products against top competitors.
Hive is proud to announce that we’ve achieved both ISO 27001:2022 and SOC 2 Type 2 certifications.