Why Watermarks Are No Longer The Sole Trusted Source To Detect AI-Generated Content

As AI-generated content becomes smarter and more realistic, the question of how to tell what is real grows harder to answer. Watermarks are often treated as a telltale sign that clearly identifies AI-generated media, giving us confidence in what we are seeing. In reality, however, they are far less reliable than they appear (or […]

January 12, 2026

More stories

Expanding our Moderation APIs with Hive’s New Vision Language Model

Hive has released the Moderation 11B Vision Language Model, which offers a powerful way to handle flexible, context-dependent moderation scenarios.

Hive | December 23, 2024
Announcing Hive’s Partnership with the Defense Innovation Unit

Hive has been awarded a landmark Department of Defense (DoD) contract for deepfake content detection.

Hive | December 5, 2024
MIT Technology Review

The US Department of Defense is investing in deepfake detection

Hive | December 5, 2024
Business Wire

Hive Secures DoD Contract for Deepfake Detection, Pioneering AI Defense Against Emerging Threats

Hive | December 5, 2024
Defense Innovation Unit

Defense Innovation Unit and DoD Collaborate To Strengthen Synthetic Media Detection Capabilities

Hive | December 4, 2024
Model Explainability With Text Moderation

Announcing Hive’s new API: Text Moderation Explanations, which helps customers understand why our Text Moderation model assigns particular scores to text strings.

Hive | December 2, 2024