Hive’s partnership with Thorn is expanding to include a new CSE Text Classifier API, which can help trust and safety teams proactively combat text-based child sexual exploitation at scale.
Why content moderation costs billions and is so tricky for Facebook, Twitter, YouTube and others
Chatroulette Is On the Rise Again — With Help From AI
Social media platforms increasingly play a pivotal role in both spreading and combating hate speech and discrimination today. Now integrated into Hive’s content moderation suite, Hive’s hate model enables more proactive and comprehensive visual and textual moderation of hate speech online.
Newly Unemployed, and Labeling Photos for Pennies
During the COVID-19 pandemic, Hive is using AI and its distributed workforce to help social media platforms manage emergent content moderation needs.
Hive has improved its automated content moderation suite with additional subclasses; it now performs better than human moderators.