
Updated Best-in-Class Automated Content Moderation Model

Improved content moderation suite with additional subclasses; now performs better than human moderators

The gold standard for content moderation has always been human moderators. Facebook alone reportedly employs more than 15,000 human moderators. But this manual approach has critical problems – namely cost, effectiveness, and scalability. Headlines in recent months and years have been filled with reports of high-profile moderation failures and, increasingly, of significant mental health issues affecting full-time content moderators.

Here at Hive, we believe AI can transform industries and business processes. Content moderation is a perfect example: there is an obligation on platforms to do this better, and we believe Hive’s role is to power the ecosystem in better addressing the challenge.

We are excited to announce the general release of our enhanced content moderation product suite, featuring significantly improved NSFW and violence detections. Our NSFW model now achieves 97% accuracy and our violence model achieves 95% accuracy, considerably better than typical outsourced moderators (~80%), and even better than an individual Hive annotator (~93%).

Deep learning models are only as good as the data they are trained on, and Hive operates the world’s largest distributed workforce of humans labeling data – now nearly 2 million contributors globally (see our earlier post).

In our new release, we have more than tripled the training data, built from a diverse set of user-generated content sourced from some of the largest content platforms in the world. Our NSFW model is now trained on more than 80 million human annotations, and our violence model on more than 40 million human annotations.

Model Design

We were selective in constructing the training dataset, strategically adding the most impactful training examples. For instance, we used active learning to select training images where the existing model's results were the most uncertain. Deep learning models produce a confidence score for an input image that ranges from 0.0 (very confident the image is not in the class) to 1.0 (very confident the image is in the class). By focusing our labeling efforts on images in the middle range (0.4 – 0.6), we were able to improve model performance specifically on edge cases.
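To make the active learning step concrete, here is a minimal sketch (illustrative only, not Hive's production code) of how images in the uncertain confidence band could be routed to human annotators; the function name is hypothetical, and the 0.4 – 0.6 thresholds follow the description above.

```python
import numpy as np

def select_uncertain(image_ids, scores, low=0.4, high=0.6):
    """Return the ids of images whose model confidence falls in the
    ambiguous middle band, so labeling effort goes to edge cases."""
    scores = np.asarray(scores)
    mask = (scores >= low) & (scores <= high)
    return [image_id for image_id, keep in zip(image_ids, mask) if keep]

# Example: only img_2 and img_3 would be sent to human annotators.
ids = ["img_1", "img_2", "img_3", "img_4"]
confidences = [0.02, 0.45, 0.58, 0.97]
print(select_uncertain(ids, confidences))  # ['img_2', 'img_3']
```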

As part of this release, we also focused on reducing ambiguity in the 'suggestive' class of our NSFW model. We conducted a large manual inspection of images where Hive annotators tended to disagree with each other or, even more crucially, where our model results disagreed with consensus Hive annotations. When examining images in certain ground truth sets, we noticed that up to 25% of disagreements between model predictions and human labels were due to erroneous labels, with the model prediction being accurate. Fixing these ground truth images was critical for improving model accuracy. For instance, in the NSFW model, we discovered that moderators disagreed on niche cases, such as which class leggings, contextually implied intercourse, or sheer clothing fell into. By carefully defining boundaries and relabeling data accordingly, we were able to teach the model the distinctions between these classes, improving accuracy by as much as 20%.
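The ground-truth audit described above can be thought of as a disagreement pass over existing labels. The sketch below (with hypothetical field and class names, for illustration only) flags images where the model's top-scoring class differs from the consensus human label so they can be re-reviewed and, where the label turns out to be wrong, corrected.

```python
def find_disagreements(records):
    """records: dicts with 'image_id', 'consensus_label', and
    'model_scores' (class name -> confidence). Returns images whose
    top model prediction disagrees with the consensus human label."""
    flagged = []
    for record in records:
        predicted = max(record["model_scores"], key=record["model_scores"].get)
        if predicted != record["consensus_label"]:
            flagged.append((record["image_id"], record["consensus_label"], predicted))
    return flagged

sample = [
    {"image_id": "a", "consensus_label": "suggestive",
     "model_scores": {"clean": 0.70, "suggestive": 0.20, "nsfw": 0.10}},
    {"image_id": "b", "consensus_label": "clean",
     "model_scores": {"clean": 0.90, "suggestive": 0.08, "nsfw": 0.02}},
]
print(find_disagreements(sample))  # [('a', 'suggestive', 'clean')]
```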

Classified as clean:

Figure 1.1 – Updated examples of images classified as clean

Classified as suggestive:

Figure 1.2 – Updated examples of images classified as suggestive

For our violence model, client feedback showed that the knife and gun classes included instances of these weapons that wouldn't be considered cause for alarm. For example, we would flag the presence of guns during video games and the presence of knives during cooking. It's worth noting that companies like Facebook have publicly acknowledged the challenge of differentiating between animated and real guns. In this release, we introduced two brand-new classes so the model can distinguish culinary knives from violent knives and animated guns from real guns, providing real, actionable alerts on weapons.

Hive can now distinguish between animated guns and real guns:

Figure 2 – Examples of animated guns

The following knife image is no longer considered violent:

Figure 3 – Examples of culinary knives
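To show what the new knife and gun classes mean for a platform consuming the model's output, here is a hypothetical post-processing sketch. The gun_in_hand, knife_in_hand, and very_bloody class names appear later in this post; animated_gun and culinary_knife are assumed names used only for illustration, not Hive's documented API schema.

```python
# Only classes that indicate a genuine weapon threat should trigger an alert.
# gun_in_hand, knife_in_hand, and very_bloody are class names confirmed in
# this post; benign look-alike classes (e.g., animated_gun, culinary_knife)
# are hypothetical names and simply never trigger alerts here.
ALERT_CLASSES = {"gun_in_hand", "knife_in_hand", "very_bloody"}

def should_alert(class_scores, threshold=0.9):
    """class_scores: dict of class name -> confidence for one image.
    Alert only when a violent class scores above the threshold."""
    return any(class_scores.get(cls, 0.0) >= threshold for cls in ALERT_CLASSES)

print(should_alert({"animated_gun": 0.97, "gun_in_hand": 0.03}))  # False
print(should_alert({"gun_in_hand": 0.95}))                        # True
```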

Model Performance

The improvement of our new models compared to our old models is significant.

Our NSFW model was the first and most mature model we built, yet after increasing training annotations from 58M to 80M, it still improved dramatically. At 95% recall, our new model's error rate is 2%, compared to 4.2% for our old model – a decrease of more than 50%.

Figure 4 – Precision-recall curve comparing Hive's 2019 NSFW model vs. its 2020 NSFW model

Our new violence model was trained on over 40M annotations – a more than 100% increase over the previous training set size of 16M annotations. Performance also improved significantly across all classes. At 90% recall, our new model’s error rate decreased from 27% to 10% (a 63% decrease) for guns, 23% to 10% (a 57% decrease) for knives, and 34% to 20% (a 41% decrease) for blood.

Figure 5 – Benchmarking the gun_in_hand, knife_in_hand, and very_bloody heads from Hive's 2019 vs. 2020 models

Over the past year, we’ve conducted numerous head-to-head comparisons vs. other market solutions, using both our held-out test sets as well as evaluations using data from some of our largest clients. In all of these studies, Hive’s models came out well ahead of all the other models tested.

Figures 6 and 7 show data from a recent study conducted with one of our clients, Reddit. For this study, Hive processed 15,000 randomly selected images through our new model as well as through the top three public cloud offerings: Amazon Rekognition, Microsoft Azure, and Google Cloud's Vision API.

Figure 6 – Compared to models from three prominent public cloud companies, Hive's updated NSFW model achieves significantly higher precision and recall

At 90% recall, Hive's precision is 99%, while the public clouds range between 68% and 78%. This implies that our relative error rate is between 22x and 32x lower!

Figure 7 – Benchmarking the gun_in_hand, knife_in_hand, and very_bloody heads from Hive's updated model vs. public cloud competitors

The outperformance of our violence model is similarly significant.

For guns, at 90% recall, Hive's precision is 90%, while public clouds achieve about 8%. This implies that our relative error rate is about 9.2x lower!

For knives, at 90% recall, Hive's precision is 89%, while public clouds achieve about 13%. This implies that our relative error rate is about 7.9x lower!

For blood, at 90% recall, Hive's precision is 80%, while public clouds range between 4% and 8%. This implies that our relative error rate is between 4.6x and 4.8x lower!
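The relative error rate figures quoted above follow directly from the precision numbers at a fixed 90% recall: the error rate is 1 minus precision, and the ratio of a public cloud's error rate to Hive's gives the "x lower" factor. A quick check of the arithmetic:

```python
def error_ratio(hive_precision, cloud_precision):
    """At a fixed recall, error rate = 1 - precision; return how many
    times higher the public cloud's error rate is than Hive's."""
    return (1 - cloud_precision) / (1 - hive_precision)

print(round(error_ratio(0.99, 0.78)))     # 22  (NSFW, best public cloud)
print(round(error_ratio(0.99, 0.68)))     # 32  (NSFW, worst public cloud)
print(round(error_ratio(0.90, 0.08), 1))  # 9.2 (guns)
print(round(error_ratio(0.89, 0.13), 1))  # 7.9 (knives)
print(round(error_ratio(0.80, 0.08), 1))  # 4.6 (blood, best public cloud)
print(round(error_ratio(0.80, 0.04), 1))  # 4.8 (blood, worst public cloud)
```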

Final Thoughts

This latest model release raises the bar on what is possible from automated content moderation solutions. Solutions like this will considerably reduce the costs of protecting digital environments and limit the need for harmful human moderation jobs across the world. Over the next few months, stay tuned for similar model releases in other relevant moderation classes such as drugs, hate speech and symbols, and propaganda.

For press inquiries, contact Kevin Guo, Co-Founder and CEO, at kevin.guo@thehive.ai.