
Deep Learning Methods for Moderating Harmful Viral Content

Content Moderation Challenges in the Aftermath of Buffalo

The racially-motivated shooting in a Buffalo supermarket – live streamed by the perpetrator and shared across social media – is tragic on many levels.  Above all else, lives were lost and families are forever broken as a result of this horrific attack.  Making matters worse, copies of the violent recording are spreading on major social platforms, amplifying extremist messages and providing a blueprint for future attacks.

Unfortunately, this is not a new problem: extremist videos and other graphic content have been widely shared for shock value in the past, with little regard for the negative impacts. And bad actors are more sophisticated than ever, uploading altered or manipulated versions to thwart moderation systems.

As the world grapples with broader questions of racism and violence, we’ve been working with our partners behind the scenes to help control the spread of this and other harmful video content in their online communities.  This post covers the concerns these partners have raised with legacy moderation approaches, and how newer technology can be more effective in keeping communities safe. 

Conventional Moderation and Copy Detection Approaches

Historically, platforms relied on a combination of user reporting and human moderation to identify and react to harmful content. Once the flagged content reaches a human moderator, enforcement is usually quick and highly accurate. 

But this approach does not scale for platforms with millions (or billions) of users.  It can take hours to identify and act on an issue, especially in the aftermath of a major news event when post activity is highest.  And it isn’t always the case that users will catch bad content quickly: when the Christchurch massacre was live streamed in 2019, it was not reported until 12 minutes after the stream ended, allowing the full video to spread widely across the web.

More recently, platforms have found success using cryptographic hashes of the original video to automatically compare against newly posted videos.  These filters can quickly and proactively screen high volumes of content, but are generally limited to detecting copies of the same video. Hashing checks often miss content if there are changes to file formats, resolutions, and codecs. And even the most advanced “perceptual” hashing comparisons – which preprocess image data in order to consider more abstract features – can be defeated by adversarial augmentations.  
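To see why even perceptual hashing is brittle, consider a minimal sketch of an average-hash comparison. This is a generic illustration on tiny grayscale grids, not Hive's or any platform's production method: a uniform brightness shift survives the hash, but a simple adversarial edit like mirroring the frame changes nearly every bit.

```python
# Illustrative average-hash comparison on small grids of grayscale values
# (0-255). A generic perceptual-hashing sketch, not a production system.

def average_hash(pixels):
    """Hash a grid of grayscale values: 1 if pixel >= grid mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original   = [[10, 200], [30, 220]]
brightened = [[20, 210], [40, 230]]   # mild re-encode: hash unchanged
mirrored   = [[200, 10], [220, 30]]   # adversarial edit: hash diverges

print(hamming_distance(average_hash(original), average_hash(brightened)))  # 0
print(hamming_distance(average_hash(original), average_hash(mirrored)))    # 4
```

The mirrored copy is perceptually near-identical to a human viewer, yet its hash differs in every position – exactly the gap that similarity models discussed below are meant to close.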

Deep Learning To Advance Video Moderation and Contain Viral Content

Deep learning models can close the moderation capability gap for platforms in multiple ways. 

First, visual classifier models can proactively monitor live or prerecorded video for indicators of violence.  These model predictions enable platforms to shut down or remove content in real time, preventing videos that break policies from being published and distributed in the first place.  The visual classifiers can look for combinations of factors – such as someone holding a gun, bodily injury, blood, and other object or scene information – to create automated and nuanced enforcement mechanisms. Specialized training techniques can also teach visual classifiers to accurately distinguish between real violence and photorealistic violence depicted in video games, so that something like a first-person shooter game walkthrough is not mistaken for a real violent event.
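A combination-of-factors rule like the one described above might be sketched as follows. The class names, thresholds, and logic here are illustrative assumptions, not Hive's actual API schema:

```python
# Hypothetical enforcement sketch: combine visual-classifier confidence
# scores into a flag decision. Class names and thresholds are illustrative.

def should_flag(scores, threshold=0.8):
    """Flag if any single violence indicator is confident, or if several
    weaker indicators co-occur (e.g. a held gun plus visible blood)."""
    indicators = ("gun_in_hand", "blood", "corpse")
    if any(scores.get(c, 0.0) >= threshold for c in indicators):
        return True
    # Nuanced rule: two moderately confident indicators together also flag.
    moderate = [c for c in indicators if scores.get(c, 0.0) >= 0.5]
    return len(moderate) >= 2

print(should_flag({"gun_in_hand": 0.9, "blood": 0.1}))            # True
print(should_flag({"gun_in_hand": 0.6, "blood": 0.55}))           # True
print(should_flag({"animated_gun": 0.95, "gun_in_hand": 0.05}))   # False
```

Note the last case: a confident "animated gun" prediction alone does not trigger enforcement, mirroring the real-versus-video-game distinction described above.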

In addition to screening using visual classifiers, platforms can harness new types of similarity models to stop reposts of videos confirmed to be harmful, even if those videos are adversarially altered or manipulated. If modified versions somehow bypass visual classification filters, these models can catch these videos based on visual similarity to the original version.   

In these cases, self-supervised training techniques expose the models to a range of image augmentation and manipulation methods, enabling them to accurately assess human perceptual similarity between image-based content. These visual similarity models can detect duplicates and close copies of the original image or video, including more heavily modified versions that would otherwise go undetected by hashing comparisons.

Unlike visual classifiers, these models do not look for specific visual subject matter in their analysis.  Instead, they quantify visual similarity on a spectrum based on overlap between abstract structural features. This means there’s no need to produce training data to optimize the model for every possible scenario or type of harmful content; detecting copies and modified versions of known content simply requires that the model accurately assess whether images or video come from the same source.

How it works: Deep Learning Models in Automated Content Moderation Systems

Using predictions from these deep learning models as a real-time signal offers a powerful way to proactively screen video content at scale. These model results can inform automated enforcement decisions or triage potentially harmful videos for human review. 

Advanced visual classification models can accurately distinguish between real and photorealistic animated weapons. Here are results from video frames containing both animated and real guns. 

To flag real graphic violence, automated moderation logic could combine confidence scores for actively held weapons, blood, and/or corpse classes while excluding more benign images.

As a second line of defense, platforms need to be able to stop reposts or modified versions of known harmful videos from spreading.  To do this, platforms can use predictions from pre-trained visual similarity models in the same way they use hash comparisons today. With an original version stored as a reference, automated moderation systems can perform a frame-wise comparison with any newly posted videos, flagging or removing new content that scores above a certain similarity threshold.
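The frame-wise comparison step can be sketched as below. The embeddings here are toy vectors and the threshold is an assumption; a real system would obtain frame embeddings from a trained similarity model:

```python
# Sketch of frame-wise similarity screening: compare embeddings of each
# frame of a new upload against reference frames of a known harmful video,
# flagging the upload if any pair exceeds a similarity threshold.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches_reference(query_frames, reference_frames, threshold=0.9):
    return any(cosine(q, r) >= threshold
               for q in query_frames for r in reference_frames)

reference = [[1.0, 0.0, 0.2], [0.8, 0.1, 0.3]]   # stored harmful video
reupload  = [[0.95, 0.05, 0.22]]                  # lightly altered copy
unrelated = [[0.0, 1.0, 0.0]]

print(matches_reference(reupload, reference))   # True
print(matches_reference(unrelated, reference))  # False
```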

In these examples, visual similarity models accurately predict that frame(s) in the query video are derived from the original reference, even under heavy augmentation. By screening new uploads against video content known to be graphic, violent, or otherwise harmful, these moderation systems can replace incomplete tools like hashing and audio comparison to more comprehensively solve the harmful content detection problem.

Final Thoughts: How Hive Can Help

No amount of technology can undo the harm caused by violent extremism in Buffalo or elsewhere.  We can, however, use new technology to mitigate the immediate and future harms of allowing hate-based violence to be spread in our online communities. 

Hive is proud to support the world’s largest and most diverse platforms in fulfilling their obligation to keep online communities safe, vibrant, and hopeful. We will continue to contribute towards state-of-the-art moderation solutions, and can answer questions or offer guidance to Trust & Safety teams who share our mission at support@thehive.ai.


Introducing Moderation Dashboard: a streamlined interface for content moderation

Over the past few years, Hive’s cloud-based APIs for moderating image, video, text, and audio content have been adopted by hundreds of content platforms, from small communities to the world’s largest and most well-known platforms like Reddit.  

However, not every platform has the resources or interest in building their own software on top of Hive’s APIs to manage their internal moderation workflows.  And since the need for software like this is shared by many platforms, it made sense to build a robust, accessible solution to fill the gap.

Today, we’re announcing the Moderation Dashboard, a no-code interface for your Trust & Safety team to design and execute custom-built moderation workflows on top of Hive’s best-in-class AI models.  For the first time, platforms can access a full-stack, turnkey content moderation solution that’s deployable in hours and accessible via an all-in-one flexible seat-based subscription model.

We’ve spent the last month beta testing the Moderation Dashboard and have received overwhelmingly positive feedback.  Here are a few highlights:

  • “Super simple integration”: customizable actions define how the Moderation Dashboard communicates with your platform
  • “Effortless enforcement”: automating moderation rules in the Moderation Dashboard UI requires zero internal development effort
  • “Streamlined human reviews”: granular policy enforcement settings for borderline content significantly reduce the need for human intervention
  • “Flexible” and “Scalable”: easy to add seat licenses as your content or team needs grow, with a stable monthly fee you can plan for

We’re excited by the Moderation Dashboard’s potential to bring industry-leading moderation to more platforms that need it, and look forward to continuing to improve it with updates and new features based on your feedback.

If you want to learn more, the post below highlights how our favorite features work.  You can also read additional technical documentation here.

Easily Connect Moderation Dashboard to Your Application

Moderation Dashboard connects seamlessly to your application’s APIs, allowing you to create custom enforcement actions that can be triggered on posts or users – either manually by a moderator or automatically if content matches your defined rules.

You can create actions within the Moderation Dashboard interface specifying callback URLs that tell the Dashboard API how to communicate with your platform.  When an action triggers, the Moderation Dashboard will ping your callback server with the required metadata so that you can successfully execute the action on the correct user or post within your platform.
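As a sketch of how that communication might look, here is a hypothetical action payload of the kind a moderation action could POST to a platform's callback server. The field names are illustrative assumptions; consult Hive's Moderation Dashboard documentation for the actual schema:

```python
# Hypothetical callback payload builder. Field names are illustrative,
# not Hive's actual Moderation Dashboard API schema.
import json

def build_action_payload(action, post_id, user_id, reason):
    """Bundle the metadata a callback server would need to execute an
    enforcement action on the correct user or post."""
    return json.dumps({
        "action": action,    # e.g. "remove_post", "ban_user"
        "post_id": post_id,
        "user_id": user_id,
        "reason": reason,    # rule or moderator decision that triggered it
    })

payload = build_action_payload("remove_post", "post_123", "user_456",
                               "matched rule: visual threshold exceeded")
print(payload)
```

Your callback endpoint would parse this payload and apply the named action inside your own platform's data model.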

Implement Custom Content Moderation Rules

At Hive, we understand that platforms have different content policies and community guidelines. Moderation Dashboard enables you to set up custom rules according to your particular content policies in order to automatically take action on problematic content using Hive model results. 

Moderation Dashboard currently supports access to both our visual moderation model and our text moderation model – you can configure which of over 50 model classes to use for moderation and at what level directly through the dashboard interface. You can easily define sets of classification conditions and specify which of your actions – such as removing a post or banning a user – to take in response, all from within the Moderation Dashboard UI. 

Once configured, Moderation Dashboard can communicate directly with your platform to implement the moderation policy laid out in your rule set. The Dashboard API will automatically trigger the enforcement actions you’ve specified on any submitted content that violates these rules.

Another feature unique to Moderation Dashboard: we keep track of (anonymized) user identifiers to give you insight into high-risk users. You can design rules that account for a user’s post history to take automatic action on problematic users. For example, platforms can identify and ban users with a certain number of flagged posts in a set time period, or with a certain proportion of flagged posts relative to clean content – all according to rules you set in the interface.
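A user-history rule like the ones described above might be sketched as follows. The window, thresholds, and data shapes are hypothetical; in the Dashboard these would be configured in the interface rather than in code:

```python
# Illustrative user-level rule: flag a user for banning when their recent
# flagged posts exceed a count or proportion threshold. All thresholds and
# data shapes are hypothetical.

def should_ban(posts, now, window=86400, max_flagged=3, max_ratio=0.5):
    """posts: list of (timestamp, was_flagged) tuples.
    Ban if the user has >= max_flagged flagged posts in the window, or if
    the flagged share of their recent posts exceeds max_ratio."""
    recent = [flagged for ts, flagged in posts if now - ts <= window]
    if not recent:
        return False
    flagged = sum(recent)
    return flagged >= max_flagged or flagged / len(recent) > max_ratio

now = 1_000_000.0
history = [(now - 100, True), (now - 200, True), (now - 300, True),
           (now - 90000, False)]  # 3 flagged posts within the last day
print(should_ban(history, now))  # True
```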

Intuitive Adjustment of Model Classification Thresholds

Moderation Dashboard allows you to configure model classification thresholds directly within the interface. You can easily set confidence score cutoffs (for visual) and severity score cutoffs (for text) that tell Hive how to classify content according to your sensitivity around precision and recall.

Streamline Human Review

Hive’s API solutions were generally designed with an eye towards automated content moderation. Historically, this has required our customers to expend some internal development effort to build tools that also allow for human review. Moderation Dashboard closes this loop by allowing custom rules that route certain content to a Review Feed accessible by your human moderation team.

One workflow we expect to see frequently: automating moderation of content that our models classify as clearly harmful, while sending posts with less confident model results to human review. By limiting human review to borderline content and edge cases, platforms can significantly reduce the burden on moderators while also protecting them from viewing the worst content.

Setting Human Review Thresholds

To do this, Moderation Dashboard administrators can set custom score ranges that trigger human review for both visual and text moderation. Content scoring in these ranges will be automatically diverted to the Review Feed for human confirmation. This way, you can focus review from your moderation team on trickier cases, while leaving content that is clearly allowable and clearly harmful to your automated rules. Here’s an example rule that sends text content scored as “controversial” (severity scores of 1 or 2) to the review feed but auto-moderates the most severe cases.
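The routing logic behind such a rule can be sketched as below, assuming the 0-3 text severity scale described in this post; the function name and action labels are illustrative:

```python
# Sketch of severity-based routing: severity 0 is allowed, 1-2 goes to the
# Review Feed for human confirmation, and 3 is auto-moderated. Labels are
# illustrative, not Hive's API values.

def route_text(severity):
    if severity == 0:
        return "allow"
    if severity in (1, 2):
        return "review_feed"     # borderline: send to human moderators
    return "auto_moderate"       # most severe: automated enforcement

print([route_text(s) for s in range(4)])
# ['allow', 'review_feed', 'review_feed', 'auto_moderate']
```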

Review Feed Interface for Human Moderators

When your human review rules trigger, Moderation Dashboard will route the post to the Review Feed of one of your moderators, where they can quickly visualize the post and see Hive model predictions to inform a final decision.

For each post, your moderators can select from the moderation actions you’ve set up to implement your content policy. Moderation Dashboard will then ping your callback server with the required information to execute that action, enabling your moderators to take quick action directly within the interface.

Additionally, Moderation Dashboard makes it simple for your Trust & Safety team administrators to onboard and grant review access to additional moderators. Platforms can easily scale their content moderation capabilities to keep up with growth.

Access Clear Intel on Your Content and Users

Beyond individual posts, Moderation Dashboard includes a User Feed that allows your moderators to see detailed post histories of each user that has submitted unsafe content. 

Here, your moderators can access an overview of each user, including their total number of posts and the proportion of those posts that triggered your moderation rules. The User Feed also shows each of that user’s posts along with the corresponding moderation categories and any action taken. 

Similarly, Moderation Dashboard makes quality control easy with a Content Feed that displays all posts moderated automatically or through human review. The Content Feed allows you to see your moderation rules in action, including detailed metrics on how Hive models classified each post. From here, administrators supervise human moderation teams for simple QA or further refine thresholds for automated moderation rules.

Effortless Moderation of Spam and Promotions

In addition to model classifications, Moderation Dashboard will also filter incoming text for spam entities – including URLs and personal information such as emails and phone numbers. The Spam Manager interface will aggregate all posts containing the same spam text into a single action item that can be allowed or denied with one click.

With Spam Manager, administrators can also define custom whitelists and blacklists for specific domains and URLs and then set up rules to automatically moderate spam entities in these lists. Finally, Spam Manager provides detailed histories of users that post spam entities for quick identification of bots and promotional accounts, making it easy to keep your platform free of junk content. 
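The aggregation step Spam Manager performs can be sketched as grouping posts by a shared spam entity. The URL regex and data shapes here are simplified assumptions:

```python
# Illustrative sketch of grouping posts that contain the same spam entity
# (here, a URL) into a single action item. Regex and shapes are simplified.
import re
from collections import defaultdict

URL_RE = re.compile(r"https?://\S+")

def group_by_spam_entity(posts):
    """posts: list of (post_id, text). Returns {url: [post_ids]} so each
    distinct URL becomes one allow/deny action item."""
    groups = defaultdict(list)
    for post_id, text in posts:
        for url in URL_RE.findall(text):
            groups[url].append(post_id)
    return dict(groups)

posts = [("p1", "cheap followers at http://spam.example now"),
         ("p2", "visit http://spam.example for deals"),
         ("p3", "no links here")]
print(group_by_spam_entity(posts))  # {'http://spam.example': ['p1', 'p2']}
```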

Final Thoughts: The Future of Content Moderation

We’re optimistic that Moderation Dashboard can help platforms of all sizes meet their obligations to keep online environments safe and inclusive. With Moderation Dashboard as a supplement to (or replacement for) internal moderation infrastructure, it’s never been easier for our customers to leverage our top-performing AI models to automate their content policies and increase efficiency of human review. 

Moderation Dashboard is an exciting shift in how we deliver our AI solutions, and this is just the beginning. We’ll be quickly adding additional features and functionality based on customer feedback, so please stay tuned for future announcements.

If you’d like to learn more about Moderation Dashboard or schedule a personal demo, please feel free to contact sales@thehive.ai


OCR Moderation with Hive: New Approaches to Online Content Moderation

Recently, image-based content featuring embedded text – such as memes, captioned images and GIFs, and screenshots of text – has exploded in popularity across many social platforms. These types of content can present unique challenges for automated moderation tools. Not only does embedded text need to be detected and ordered accurately, it must also be analyzed with contextual awareness and attention to semantic nuance. 

Emojis have historically been another obstacle for automated moderation. Thanks to native support across many devices and platforms, these characters have evolved into a new online lexicon for accentuating or replacing text. Many emojis have also developed connotations that are well-understood by humans but not directly related to the image itself, which can make it difficult for automated solutions to identify harmful or inappropriate text content.

To help platforms tackle these challenges, Hive offers optical character recognition (OCR)-based moderation as part of our content moderation suite. Our OCR models are optimized for the types of digitally-generated content that commonly appear on social platforms, enabling robust AI moderation on content forms that are widespread yet overlooked by other solutions. Our OCR moderation API combines competitive text detection and transcription capabilities with our best-in-class text moderation model (including emoji support) into a single response, making it easy for platforms to take real-time enforcement actions across these popular content formats. 

OCR Model for Text Recognition

Effective OCR moderation starts with training for accurate text detection and extraction. Hive’s OCR model is trained on a large, proprietary set of examples that optimizes for how text commonly appears within user-generated digital content. Hive has the largest distributed workforce for data labeling in the world, and we leaned on this capability to provide tens of millions of human annotations on these examples to build our model’s understanding. 

We recently conducted a head-to-head comparison of our OCR model against top public cloud solutions using a custom evaluation dataset sourced from social platforms. We were particularly interested in test examples that featured digitally-generated text – such as memes and captioned images – to capture how content commonly appears on social platforms and selected evaluation data accordingly. 

In this evaluation, we looked at end-to-end text recognition, which includes both text detection and text transcription. Here, Hive’s OCR model outperformed or was competitive with other models on both exact transcription and transcription allowing character-level errors. At 90% recall, Hive’s OCR model achieved a precision of 98%, while public cloud models ranged from ~88% to 97%, implying a similar or lower end-to-end error rate.

OCR Moderation: Language Support

We recognize that many platforms’ moderation needs extend beyond English-speaking users. Hive’s OCR model supports text recognition and transcription for many widely spoken languages with comparable performance, many of which are also supported by our text moderation solutions. Here’s an overview of our current language support:

| Language   | OCR Support? | Text Moderation Support? |
|------------|--------------|--------------------------|
| English    | Yes          | Yes (Model)              |
| Spanish    | Yes          | Yes (Model)              |
| French     | Yes          | Yes (Model)              |
| German     | Yes          | Yes (Model)              |
| Mandarin   | Yes          | Yes (Pattern Match)      |
| Russian    | Yes          | Yes (Pattern Match)      |
| Portuguese | Yes          | Yes (Model)              |
| Arabic     | Yes          | Yes (Model)              |
| Korean     | Yes          | Yes (Pattern Match)      |
| Japanese   | Yes          | Yes (Pattern Match)      |
| Hindi      | Yes          | Yes (Model)              |
| Italian    | Yes          | Yes (Pattern Match)      |

Moderation of Detected Text

Hive’s OCR moderation solution goes beyond producing a transcript – we then apply our best-in-class text moderation model to understand the meaning of that speech in context (including any detected emojis). Our backend will automatically feed text detected in an image as an input to our text moderation model, making our model classifications on image-based text accessible with a single API call. Our text model is generally robust to misspellings and character substitutions, enabling high classification accuracies on text extracted via OCR even if errors occur in transcription. 

Hive’s text moderation model can classify extracted text across several sensitive or inappropriate categories, including sexuality, threats or descriptions of violence, bullying, and racism. 

Another critical use-case is moderating spam and doxxing: OCR moderation will quickly and accurately flag images containing emails, phone numbers, addresses and other personal identifiable information.  Finally, our text moderation model can also identify promotions such as soliciting services, asking for shares and follows, soliciting donations, or links to external sites. This gives platforms new tools to curate user experience and remove junk content. 

We understand that verbal communication is rarely black and white – context and linguistic nuance can have profound effects on how meaning and intent of words are perceived. To help navigate these gray areas, our text model responses supplement classifications with a score from benign (score = 0) to severe (score = 3), which can be used to adapt any necessary moderation actions to platforms’ individual needs and sensitivities. You can read more about our text models in previous blog posts or in our documentation.

Our currently supported moderation classes in each language are as follows:

| Language   | Classes                          |
|------------|----------------------------------|
| English    | Sexual, Hate, Violence, Bullying |
| Spanish    | Sexual, Hate                     |
| Portuguese | Sexual, Hate                     |
| French     | Sexual                           |
| German     | Sexual                           |
| Hindi      | Sexual                           |
| Arabic     | Sexual                           |

Emoji Classification for Text Moderation

Emoji recognition is a unique feature of Hive’s OCR moderation model that opens up new possibilities for identifying harmful or harassing text-based content. Emojis can be particularly useful in moderation contexts because they can subtly (or not-so-subtly) alter how accompanying text is interpreted by the reader. Text that is otherwise innocuous can easily become inappropriate when accompanied by a particular emoji and vice-versa.

Hive OCR is able to detect and classify any emojis supported by Apple, Samsung, or Google devices. Our OCR model currently achieves a weighted accuracy of over 97% when classifying emojis. This enables our text moderation model to account for contextual meaning and connotations of emojis used in input text. 

To get a sense of our model’s understanding, let’s take a look at some examples of how use of emojis (or inclusion of text around emojis) changes our model predictions to align with human understanding. Each of these examples is from a real classification task submitted to our latest model release.

Here’s a basic example of how adding an emoji changes our model response from classifying as clean to classifying as sensitive.  Our models understand not only the verbal concept represented by the emoji, but what the emoji means semantically based on where it is located in the text. In this case, the bullying connotation of the “garbage” or “trash” emoji would be completely missed by an analysis of the text alone. 

Our model is similarly sensitive to changes in semantic meaning caused by substitutions of emojis for text.

In this case, our model catches the sexual connotation added by the eggplant emoji in place of the word “eggplant.” Again, the text alone without an emoji – “lemme see that !” – is completely clean.

In addition to understanding how emojis can alter the meaning of text, our model is also sensitive to how text can change implications of emojis themselves.

Here, adding the phrase “hey hotty” transforms an emoji usually used innocuously into a message with suggestive intent, and our model prediction changes accordingly.  

Finally, Hive’s OCR and text moderation models are trained to differentiate between each skin tone option for emojis in the “People” category and understand their implications in the context of accompanying text. We are currently exploring how the ability to differentiate between light and darker skin tones can enable new tools to identify hateful, racist, or exclusionary text content.

OCR Moderation: Final Thoughts

User preferences for online communication are constantly evolving in both medium and content, which can make it challenging for platforms to keep up with abusive users. Hive prides itself on identifying blindspots in existing moderation tools and developing robust AI solutions using high-quality training data tailored to these use-cases. We hope that this post has showcased what’s possible with our OCR moderation capabilities and given some insight into our future directions. 

Feel free to contact sales@thehive.ai if you are interested in adding OCR capabilities to your moderation suite, and please stay tuned as we announce new features and updates!


New and Improved AI Models for Audio Moderation

Live streaming, online voice chat, and teleconferencing have all exploded in popularity in recent years. A wider variety of appealing content, shifting user preferences, and unique pressures of the coronavirus pandemic have all been major drivers of this growth. Daily consumption of video and audio content has steadily increased year-over-year, with a recent survey indicating that a whopping 90% of young people watch video content daily across a variety of platforms. 

As the popularity of user-generated audio and video increases, so too does the difficulty of moderating this content efficiently and effectively. While images and text can usually be analyzed and acted on quickly by human moderators, audio/video content – whether live or pre-recorded – is lengthy and linear, requiring significantly more review time for human moderation teams. 

Platforms owe it to their users to provide a safe and inclusive online environment. Unfortunately, the difficulties of moderating audio and video – in addition to the sheer volume of content – have led to passive moderation approaches that rely on after-the-fact user reporting. 

At Hive, we offer access to robust AI audio moderation models to help platforms meet these challenges at scale. With Hive APIs, platforms can access nuanced model classifications of their audio content in near-real time, allowing them to automate enforcement actions or quickly pass flagged content to human moderators for review. By automating audio moderation, platforms can cast a wider net when analyzing their content and take action more quickly to protect their users. 

How Hive Can Help: Speech Moderation

We built our audio solutions to identify harmful or inappropriate speech with attention to context and linguistic subtleties. By natively combining real-time speech-to-text transcription with our best-in-class text moderation model, Hive’s audio moderation API makes our model classifications and a full transcript of any detected speech available with a single API call.  Our API can also analyze audio clips sampled from live content and produce results in 10 seconds or less, providing real-time content intelligence that lets platforms act quickly.

Speech Transcription

Effective speech moderation needs to start with effective speech transcription, and we’ve been working hard to improve our transcription performance. Our transcription model is trained on moderation-relevant domains such as video game streams, game lobbies, and argumentative conversations.

In a recent head-to-head comparison, Hive’s transcription model outperformed or was competitive with top public cloud providers on several publicly available datasets (the evaluation data for each set was withheld from training). 

Each evaluation dataset consisted of about 10 hours of recorded English speech with varying accents and audio quality. Hive’s transcription model achieved lower word error rates than top public cloud models. Word error rate measures the ratio of incorrect, missed, and inserted words to the total number of words in the reference transcript, so Hive’s accuracy was roughly 10-20% higher than competing solutions. 
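Word error rate is the standard transcription metric and is computed as an edit distance over word tokens. A minimal reference implementation, for readers who want to reproduce the calculation:

```python
# Word error rate (WER): (substitutions + deletions + insertions) divided
# by the number of words in the reference, via Levenshtein distance over
# word tokens. A standard textbook implementation, not Hive-specific code.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference words and first
    # j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one miss: ~0.167
```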

Audio Moderation

Hive’s audio moderation tools go beyond producing a transcript – we then apply our best-in-class text moderation model to understand the meaning of that speech in context. Here, Hive’s advantage starts with our data. We operate the largest distributed data-labeling workforce in the world, with over five million Hive annotators providing accurate and consensus-driven training labels on diverse example text sourced from relevant domains. For our text models, we leaned on this capability to produce a vast, proprietary training set with millions of examples annotated with human classifications. 

Our models classify speech across five main moderation categories: sexual content, bullying, hate speech, violence, and spam. With ample training data at our disposal, our models achieve high accuracy in identifying these types of sensitive speech, especially at the most severe level. Our hate speech model, for example, achieved a balanced accuracy of ~95% in identifying the most severe cases with a 3% false positive rate (based on a recent evaluation using our validation data). 

Thoughtfully-chosen and accurately labeled training data is only part of our solution here. We also designed our verbal models to provide multi-leveled classifications in each moderation category. Specifically, our model will return a severity score ranging from 0 to 3 (most severe) in each major moderation class based on its understanding of full sentences and phrases in context. This gives our customers more granular intelligence on their audio content and the ability to tailor moderation actions to community guidelines and user expectations. Alternatively,  borderline/controversial cases can be quickly routed to human moderators for review.  

In addition to model classifications, our model response object includes a punctuated transcript with confidence scores for each word to allow more insight into your content and enable quicker review by human moderators if desired. 

Language Support

We recognize that many platforms’ moderation needs extend beyond English-speaking users. At the time of writing, we support audio moderation for English, Spanish, Portuguese, French, German, Hindi, and Arabic. We train each model separately with an eye towards capturing subtleties that vary across cultures and regions.

We frequently update our models to add support for our moderation classes in each language, and are currently working to add more support for these and other widely spoken languages. 

Beyond Words: Sound Classification

Hive’s audio moderation model also offers the unique ability to detect and classify undesirable sounds. This opens up new insights into audio content that may not be captured by speech transcription alone. For example, our audio model can detect explicit or inappropriate noises, shouting, and repetitive or abrasive noises to enable new modalities for audio filtering and moderation. We hope that these sound classifications can help platforms identify toxic behaviors beyond bad speech and take action to improve user experience. 

Final Thoughts: Audio Moderation

Hive audio moderation makes it simple to access accurate, real-time intelligence on audio and video content and take informed moderation actions to enforce community guidelines. Our solution is nimble and scalable, helping platforms of all sizes grow with peace of mind. We believe our tools can have a significant impact in curbing toxic or abusive behavior online and lead to better experiences for users.

At Hive, we pride ourselves on continuous improvement. We are frequently optimizing and adding features to our models to increase their understanding and cover more use cases based on client input. We’d love to hear any feedback or suggestions you may have, and please stay tuned for updates!
