Why Watermarks Are No Longer The Sole Trusted Source To Detect AI-Generated Content

As AI-generated content becomes smarter and more realistic, the question of how to tell what is real becomes increasingly difficult to answer. Watermarks are often treated as a telltale sign of AI-generated media, giving us confidence in what we are seeing. In reality, however, they are far less reliable than they appear (or […]

January 12, 2026

More stories

Expanding Our CSAM Detection API

Hive is now offering Thorn’s predictive technology through our CSAM detection API, enabling customers to identify novel cases of CSAM.

Hive | November 21, 2024
Ars Technica

Child safety org flags new CSAM with AI trained on real child sex abuse images

Hive | November 21, 2024
Semafor

AI deepfakes should be a top concern for global intelligence agencies, Hive CEO says

Hive | October 30, 2024
Announcing General Availability of Hive Models

Hive announces that select Hive models and popular open-source models are now directly accessible for customers to deploy and integrate into their workflows.

Hive | October 4, 2024
Announcing Hive’s Integration with NVIDIA NIM

Hive is excited to announce the integration of our AI models with NVIDIA NIM, allowing customers to deploy our industry-leading models in private clouds and on-premises.

Hive | September 23, 2024
Business Wire

Hive to Accelerate AI Adoption in Private Clouds and On-Prem Environments Using NVIDIA NIM

Hive | September 23, 2024
