{"id":494,"date":"2020-06-25T07:05:00","date_gmt":"2020-06-25T07:05:00","guid":{"rendered":"https:\/\/thehive.ai\/blog\/?p=494"},"modified":"2024-07-04T16:55:19","modified_gmt":"2024-07-04T16:55:19","slug":"hive-hate-model-automated-content-moderation-suite","status":"publish","type":"post","link":"https:\/\/thehive.ai\/blog\/hive-hate-model-automated-content-moderation-suite","title":{"rendered":"Hive Adds Hate Model to Fully-Automated Content Moderation Suite"},"content":{"rendered":"\n<p>Social media platforms increasingly play a pivotal role in both spreading and combating hate speech and discrimination today. Now integrated into Hive\u2019s content moderation suite, Hive\u2019s hate model enables more proactive and comprehensive visual and textual moderation of hate speech online.<\/p>\n\n\n\n<p>Year over year, our <a href=\"https:\/\/hivemoderation.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">content moderation suite<\/a> has emerged as the preeminent AI-powered solution both to help platforms keep their environments protected from harmful content and to dramatically reduce the exposure of human moderators to sensitive content. Hive\u2019s content moderation models have consistently and significantly <a href=\"http:\/\/thehive.ai\/blog\/updated-best-in-class-automated-content-moderation-model\" target=\"_blank\" rel=\"noreferrer noopener\">outperformed comparable models<\/a>, and we are proud to work with more than 30 of the world\u2019s largest and fastest-growing social networks and digital video platforms.<\/p>\n\n\n\n<p>Today we are excited to officially integrate our hate model into our content moderation product suite, helping our current and future clients combat racism and hate speech online. We believe that blending our best-in-class models with the significant scale of our clients\u2019 platforms can result in real step-change impact.<\/p>\n\n\n\n<p>Detecting hate speech is a uniquely dynamic and rapidly evolving challenge. 
Context and subtle nuances vary widely across cultures, languages, and regions. Additionally, hate speech itself isn\u2019t always explicit. Models must be able to recognize subtleties quickly and proactively. Hive is committed to taking on that challenge and, over recent months, we have partnered with several of our clients to ready our hate model for today\u2019s launch.<\/p>\n\n\n\n<h2>How We Help<\/h2>\n\n\n\n<p>Hate speech occurs both visually and textually, with a large share appearing in photos and videos. Powered by our distributed global workforce of more than 2 million registered contributors, Hive\u2019s hate model is trained on more than 25 million human judgments and supports both visual classification models and text moderation models.<\/p>\n\n\n\n<p>Our visual classification models classify entire images into different categories by assigning a confidence score for each class. These models can be multi-headed, where each group of mutually exclusive model classes belongs to a single model head. Within our hate model, example heads include Nazi symbols, KKK symbols, and other terrorist or white supremacist propaganda. Results from our model are actioned according to platform rules. Many posts are automatically actioned as safe or restricted; others are routed for manual review of edge cases where a symbol may be present but not in a prohibited use. Our visual hate models typically achieve &gt;98% recall and a &lt;0.1% false positive rate. View our full documentation <a href=\"https:\/\/docs.thehive.ai\/docs\/visual-content-moderation#hate\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<p>Our text content moderation model is a multi-head classifier that now includes hate speech. 
This model automatically detects \u201chateful language\u201d &#8211; defined, with input from our clients, as any language, expression, writing, or speech that expresses or incites violence against, attacks, degrades, or insults a particular group or an individual in a particular group. These groups are defined by protected attributes such as race, ethnicity, national origin, gender, sex, sexual orientation, disability, and religion. Hateful language includes but is not limited to hate speech, hateful ideology, racial and ethnic slurs, and racism. View our full documentation <a href=\"https:\/\/docs.thehive.ai\/docs\/classification-text\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<p>We are also breaking ground on the particularly challenging problem of multimodal relationships between visual and textual content, and expect to add multimodal capabilities in the coming weeks. Multimodal learning allows our models to understand the relationship between text and visual content in the same setting. This type of learning is important to better understand the meaning of language and the context in which it is used. Accurate multimodal systems can avoid flagging cases where the visual content on its own may be considered hateful, but the presence of counterspeech text \u2014 where individuals speak out against the hateful content \u2014 negates the hateful signal in the visual content. Similarly, multimodal systems can help flag cases where the visual and textual content independently are not considered to be hateful, but in the context of one another are in fact hateful, such as hateful memes. 
Over time, we expect this capability to further reduce the need for human reviews of edge cases.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"1024\" height=\"471\" src=\"https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1-1024x471.jpg\" alt=\"\" class=\"wp-image-545\" srcset=\"https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1-1024x471.jpg 1024w, https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1-300x138.jpg 300w, https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1-768x353.jpg 768w, https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1-1536x706.jpg 1536w, https:\/\/staticblog.thehive.ai\/uploads\/2024\/07\/figure1-1.jpg 1888w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2>What\u2019s Next?<\/h2>\n\n\n\n<p>Today\u2019s release is a milestone we are proud of, but merely the first step in a multi-year commitment to helping platforms filter hate speech from their environments. We will continue to expand and enhance model classification with further input from additional moderation clients and industry groups.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Social media platforms increasingly play a pivotal role in both spreading and combating hate speech and discrimination today. 
Now integrated into Hive\u2019s content moderation suite, Hive\u2019s hate model enables more proactive and comprehensive visual and textual moderation of hate speech online.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"kia_subtitle":""},"categories":[8,4],"tags":[],"_links":{"self":[{"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/posts\/494"}],"collection":[{"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/comments?post=494"}],"version-history":[{"count":3,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/posts\/494\/revisions"}],"predecessor-version":[{"id":555,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/posts\/494\/revisions\/555"}],"wp:attachment":[{"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/media?parent=494"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/categories?post=494"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thehive.ai\/blog\/wp-json\/wp\/v2\/tags?post=494"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}