Why Watermarks Are No Longer The Sole Trusted Source To Detect AI-Generated Content

As AI-generated content becomes more sophisticated and realistic, telling what is real becomes increasingly difficult. Watermarks are often treated as a telltale sign that clearly identifies AI-generated media, giving us confidence in what we are seeing. In reality, however, they are far less reliable than they appear (or […]

January 12, 2026

More stories

The Wall Street Journal

AI-Created Images Are So Good Even AI Has Trouble Spotting Some

Hive | April 11, 2023
Flag AI-Generated Text with Hive’s New Classifier

Hive announces a new AI-Generated Text Detector that can differentiate between human-written and AI-generated text with high accuracy.

Hive | February 1, 2023
Financial Times

Can Big Tech make livestreams safe?

Hive | January 22, 2023
PR Newswire

Yubo scales real-time audio moderation technology across four major international markets

Hive | November 16, 2022
Spot Deepfakes With Hive’s New Deepfake Detection API

Hive announces a new Deepfake Detection API — a powerful tool that allows digital platforms to easily identify and moderate realistic synthetic images and video.

Hive | November 2, 2022
Detect and Moderate AI-Generated Artwork Using Hive’s New API

Hive announces a new API to aid in the moderation of AI-generated media. This powerful classification model identifies both AI artwork and its source engine.

Hive | September 23, 2022