
Matching Against CSAM: Hive’s Innovative Integration with Thorn’s Safer Match

We are excited to announce that Hive’s partnership with Thorn is now live! Our current and prospective customers can now easily integrate Thorn’s Safer Match, a CSAM (child sexual abuse material) detection solution, using Hive’s APIs.

The Danger of CSAM

CSAM encompasses the production, distribution, and possession of explicit images and videos depicting minors. Every platform with an upload button or messaging capability is at risk of hosting this material. In fact, in 2023 alone, the National Center for Missing & Exploited Children received over 104 million reports of potential CSAM.

The current state-of-the-art approach is to “hash” the content with a hashing function and then “match” it against a database aggregating more than 57 million verified CSAM hashes. If the content hash matches an entry in the database, the content can be flagged as CSAM.

How the Integration Works 

When presented with visual content, we first hash it, then match it against known instances of CSAM.

  1. Hashing: We take the submitted image or video and convert it into one or more hashes.
  2. Deletion: We then immediately delete the submitted content, ensuring nothing stays on Hive’s servers.
  3. Matching: We match the hashes against the CSAM database and return the match results to you (a simplified sketch of this flow is shown below).
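
To make the flow above concrete, here is a minimal sketch of the hash-and-match idea in Python. It is purely illustrative: the in-memory hash set, the function name, and the use of a plain MD5 digest are stand-ins, and Safer’s actual matching also relies on perceptual and scene-sensitive hashing rather than cryptographic hashes alone.

```python
import hashlib

# Hypothetical stand-in for the aggregated database of 57+ million verified
# CSAM hashes; in the real integration, matching happens against Thorn's
# Safer database, not a local set.
KNOWN_CSAM_MD5_HASHES: set[str] = set()

def hash_delete_and_match(content: bytes) -> bool:
    """Illustrative version of the hash -> delete -> match flow."""
    # 1. Hashing: convert the submitted content into a hash.
    md5_hex = hashlib.md5(content).hexdigest()

    # 2. Deletion: discard the submitted content immediately; only the hash
    #    is kept for matching.
    del content

    # 3. Matching: check the hash against the database of known hashes.
    return md5_hex in KNOWN_CSAM_MD5_HASHES
```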

Hive’s partnership with Thorn allows our customers to easily incorporate Thorn’s Safer Match into their detection toolset. Safer Match provides programmatic identification of known CSAM through cryptographic and perceptual hash matching for images and, for videos, through proprietary scene-sensitive video hashing (SSVH).

How You Can Use This API Today

First, talk to your Hive sales rep, and get an API key and credentials for your new endpoint.

Image

For an image, simply send the image to us and we will hash it using MD5 and Safer’s hashing algorithms. Once the image is hashed and matched, we return the results in our output JSON.
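
As a rough sketch of what an image request might look like, the example below posts an image and checks the result. The endpoint URL, header format, and response field are placeholders based on typical Hive API usage, not the definitive schema; your Hive onboarding documentation has the authoritative details.

```python
import requests

# Placeholder endpoint and API key; use the endpoint and credentials
# provided by your Hive sales representative.
API_URL = "https://api.thehive.ai/api/v2/task/sync"
API_KEY = "YOUR_API_KEY"

with open("image.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Token {API_KEY}"},
        files={"media": f},
    )

result = response.json()
# The real output JSON reports whether the image's MD5 and Safer hashes
# matched known CSAM; the field name below is a placeholder.
if result.get("matched"):
    print("Image matched a known CSAM hash - escalate per your policy.")
```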

Video

You can also send videos to the API. We use MD5 hashes and Safer’s proprietary perceptual hashing for videos as well, but they serve different purposes. MD5 returns exact-match videos and can only indicate whether the whole video is a known CSAM video.

Safer, by contrast, hashes individual scenes within the video and flags those that are known to be violating. Matched scenes are demarcated by start and end timestamps, as shown in the response below.

Note: For Safer’s SSVH, videos are sampled at 1 FPS.
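
The snippet below sketches how a video response with scene-level matches might be handled. The field names are illustrative placeholders; the actual response schema is described in our documentation.

```python
# Placeholder video response for illustration; consult Hive's documentation
# for the actual field names and schema.
sample_response = {
    "md5_match": False,
    "matched_scenes": [
        {"start_time": 12.0, "end_time": 47.0},
    ],
}

# Whole-video result from MD5 exact matching.
if sample_response["md5_match"]:
    print("Entire video matches a known CSAM video.")

# Scene-level results from Safer's SSVH, each demarcated by start and end
# timestamps.
for scene in sample_response["matched_scenes"]:
    print(f"Matched scene from {scene['start_time']}s to {scene['end_time']}s")
```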

A diagram of how Hive processes media to match against Thorn’s classifier and the format of the response

For more information, you can reference our documentation.

Teaming Up For a Safer Internet

CSAM is one of the most pervasive and harmful issues on the internet today. Legal requirements make the problem even harder to tackle, and previous technical solutions required significant integration effort. But together with Thorn’s proactive technology, we can respond to this challenge and help make the internet a safer place for everyone.