Hive Adds Thorn's Grooming Detection to CSE Text Classifier API

Hive | April 7, 2026

Children spend more time online than ever before, using digital platforms to talk, play, and learn. But the same platforms that serve as creative and social spaces can also create opportunities for manipulation and abuse. To protect young users, platforms need reliable tools that can help identify unsafe activity and intervene before incidents intensify.

That is why today, building on our partnership with Thorn, we are expanding our detection offerings by adding Thorn's grooming label to the CSE (Child Sexual Exploitation) Text Classifier API. Developed by Thorn's child safety experts, this classifier detects online behaviors that may indicate child sexual exploitation or abuse of a minor, giving trust and safety teams greater visibility into these harmful interactions.

Addressing Grooming at Its Earliest Stages

Grooming is one of the fastest-growing online threats to children. It often starts quietly, through manipulation, pressure, or emotional control, and these interactions can lead to self-generated CSAM, attempts at offline contact, and serious harm. Because grooming can unfold quickly, often in 1:1 chats and across large volumes of conversations, effective child safety depends on technology that can identify risk at scale. Recognizing grooming early creates an opportunity to interrupt an interaction before it escalates.

How This Detection Works

Grooming classification analyzes text in both English and Spanish to identify when grooming indicators appear in an online conversation. Grooming activity often shows up in text through behaviors like:

- Soliciting production or exchange of CSAM
- Planning or suggesting offline meetings
- Encouraging self-harm or emotional dependency
- Developing romantic or sexual relationships with minors
- Soliciting personal information or images
- Moving conversations to less moderated platforms

When the classification model detects these signals, it assigns a grooming label along with a confidence score, helping teams distinguish grooming from other violations and prioritize the messages that require the quickest review. A brief sketch of how a team might consume this output appears at the end of this post.

Our Work With Thorn

Hive and Thorn are continuously expanding our partnership to bring purpose-built child sexual abuse and exploitation detection technology to platforms worldwide. By combining Thorn's machine learning classification models and hashing capabilities with Hive's enterprise-grade APIs, trust and safety teams gain accurate, scalable detection across text, images, and video. Together, we help platforms uncover both known and previously unknown child sexual abuse material and strengthen proactive protections against online child sexual exploitation.

If you have further questions or would like to learn more, please reach out to sales@thehive.ai or visit our website.
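For teams planning an integration, here is a minimal sketch of how a grooming classification result might be consumed. The endpoint URL, authentication header, request fields, response shape, and review threshold shown here are illustrative assumptions, not Hive's documented API contract; refer to the official API documentation for the actual interface.

```python
# Hypothetical sketch of consuming a CSE text classification result.
# Endpoint, headers, field names, and response shape are assumptions
# for illustration only; consult Hive's API reference for the real contract.
import requests

API_URL = "https://api.thehive.ai/api/v2/task/sync"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

GROOMING_THRESHOLD = 0.9  # example review threshold; tune per platform


def classify_message(text: str) -> dict:
    """Submit one chat message for CSE text classification."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Token {API_KEY}"},
        data={"text_data": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def triage(result: dict) -> None:
    """Escalate messages whose grooming score crosses the review threshold."""
    # Assumed response shape: a list of {"class": ..., "score": ...} entries.
    for label in result.get("classes", []):
        if label["class"] == "grooming" and label["score"] >= GROOMING_THRESHOLD:
            print(f"Escalate for review (confidence {label['score']:.2f})")
```

In a production pipeline, the threshold check above would typically feed a review queue ordered by confidence score, so that the highest-risk conversations reach human moderators first.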