What does it mean to be “best-in-class”?
We often refer to our models as “industry-leading” or “best-in-class,” but what does this actually mean in practice? How are we better than our competitors, and by how much? It is easy to throw these terms around, but we mean it — and we have the evidence to back it up. In this blog post, we’ll be walking through some of the benchmarks that we have run against similar products to show how our models outperform the competition.
Visual Moderation
First, let’s take a look at one of our oldest and most popular models: visual moderation. To compare our model to its major competitors, we ran a test set of NSFW, suggestive, and clean images through all models.
Visual moderation is a classification task — in other words, the model’s job is to classify each submitted image into one of several categories (in this case, NSFW or Clean). A popular and effective way to measure the performance of a classification model is to look at its precision and recall. Precision is the number of true positives (i.e., correctly identified NSFW images) over the number of predicted positives (images predicted to be NSFW). Recall is the number of true positives (correctly identified NSFW images) over the number of ground-truth positives (actual NSFW images).
There is a tradeoff between the two. If you predict all images to be NSFW, you will have perfect recall — you caught all the NSFW images! — but horrible precision because you incorrectly classified many clean images as NSFW. The goal is to have both high recall and high precision, no matter what confidence threshold is used.
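To make this concrete, here is a minimal sketch (in Python) of how precision and recall are computed from a model’s confidence scores, and how sweeping the confidence threshold traces out the tradeoff described above. The scores, labels, and thresholds below are illustrative and are not drawn from our benchmark data.

```python
# A minimal sketch of precision and recall at a given confidence threshold.
# All scores, labels, and thresholds are illustrative.

def precision_recall(scores, labels, threshold):
    """Precision and recall for the positive (NSFW) class at one threshold."""
    flagged = [label for score, label in zip(scores, labels) if score >= threshold]
    true_positives = sum(flagged)                  # labels: 1 = NSFW, 0 = clean
    predicted_positives = len(flagged)
    ground_truth_positives = sum(labels)
    precision = true_positives / max(predicted_positives, 1)
    recall = true_positives / max(ground_truth_positives, 1)
    return precision, recall

# Sweeping the threshold traces out the precision/recall curve.
scores = [0.98, 0.91, 0.75, 0.40, 0.15, 0.05]  # model confidence that the image is NSFW
labels = [1, 1, 1, 0, 1, 0]                    # ground truth
for threshold in (0.1, 0.5, 0.9):
    p, r = precision_recall(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Lowering the threshold catches more NSFW images (higher recall) at the cost of flagging more clean ones (lower precision), which is exactly the tradeoff the curve visualizes.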
With our visual moderation models, we’ve achieved this. We plotted the results of our test as a precision/recall curve, showing that even at high recall we maintain high precision and vice versa while our competitors fall behind us.
The above plot is for NSFW content detection. Our precision at 90% recall is nearly perfect at 99.6%, which makes our error rate a whopping 45 times lower than Public Cloud C’s. Even Public Clouds A and B, which are closer to us in performance, have error rates 12.5 times and 22.5 times higher than ours respectively.
We also benchmarked our model for suggestive content detection, or content that is inappropriate but not as explicit as our NSFW category. Hive’s error rate remains far below the other models’, at 6 times lower than Public Cloud A’s and 12 times lower than Public Cloud C’s. Public Cloud B did not offer a similar category and thus could not be compared.
We limited this test to NSFW and explicit imagery because our competitors do not have equivalents to our other visual moderation classes, such as drugs, gore, and terrorism. This makes direct comparison difficult, though it also speaks to the fact that we offer far more classes than many of our competitors. With more than 90 subclasses, our visual moderation model far exceeds its peers in the granularity of its results: we don’t just have a class for NSFW, but also classes for nudity, underwear, cleavage, and other finer-grained categories that give our customers a more in-depth understanding of their content.
Text Moderation
We used precision/recall curves to compare our text moderation model as well. For this comparison, we charted our performance across eight different classes. Hive outperforms all peer models on every single one.
Hive’s error rate on sexual content is 4 times lower than that of its closest competitor, Public Cloud B. Our other two competitors for that class both have error rates 6 times higher. The threat class boasts similar metrics, with Hive’s error rate between 2 and 4 times lower than its peers’.
Hive’s model for hateful content detection is on par with our competitors’, remaining slightly ahead at every threshold. Our model for bullying content also stays ahead, with an error rate 2 times lower than all comparable models.
Hive is one of few companies to offer text moderation for drugs and weapons, and our error rates here are also worth noting: our only competitor’s error rates are 4 times higher than ours for drugs and 8 times higher for weapons.
Hive also offers the child exploitation class, one that few others provide. With this class, we achieve an error rate 8 times lower than our only other major competitor.
Audio Moderation
For Audio Moderation, we evaluate our model using word error rate (WER), the gold-standard metric for speech recognition systems. Word error rate is the number of transcription errors (substitutions, insertions, and deletions) divided by the number of words in the reference transcript; a perfect word error rate is 0. As you can see, we achieve the best or near-best performance across a variety of languages.
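For illustration, here is a minimal sketch of how WER can be computed as the word-level edit distance between a hypothesis transcript and the reference, divided by the reference length. The sample sentences are invented for the example.

```python
# A minimal sketch of word error rate (WER): Levenshtein distance between the
# hypothesis and reference word sequences, divided by the reference length.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25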
We excel across the board, with the lowest word error rate on the majority of the languages offered. For Spanish in particular, our word error rate is more than 4 times lower than Public Cloud B’s.
For German and Italian, we trail Public Cloud C only slightly and remain ahead of all other competitors.
Optical Character Recognition (OCR)
To benchmark our OCR model, we calculated the F-score for our model as well as several of our competitors. F-score is the harmonic mean of a model’s precision and recall, combining both of them into one measurement. A perfect F-score is 1. When comparing general F-scores, Hive excels as shown below.
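For reference, here is a minimal sketch of the F-score calculation (the standard F1 score, the harmonic mean of precision and recall). The precision and recall values below are illustrative, not our measured results.

```python
# A minimal sketch: the F1 score is the harmonic mean of precision and recall.
# The example values are illustrative only.

def f_score(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f_score(0.95, 0.90))  # ≈ 0.9243
```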
We also achieve best-in-class or near-best performance when comparing by language, as shown in the graphs below. With some languages, we excel by quite a large margin. For Chinese and Korean in particular, Hive’s F-score is more than twice that of any competitor. We fall slightly behind in Hindi, yet still perform significantly better than Public Cloud A.
Demographics
We evaluated our age prediction model by calculating mean error, or how far off our age predictions were from the truth. Since the test dataset we used is labeled using age ranges and not individual numbers, mean error is defined as the distance in years from the closest end of the correct age range (i.e., guessing 22 for someone in the range 25-30 is an error of 3 years). A perfect mean error is 0.
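Below is a minimal sketch of this mean-error calculation under that labeling scheme; the predictions and age ranges are made up for the example.

```python
# A minimal sketch of the mean-error calculation described above: each
# prediction's error is its distance in years to the nearest end of the
# labeled age range (0 if the prediction falls inside the range).
# The predictions and ranges below are illustrative.

def range_error(predicted_age, age_range):
    low, high = age_range
    if predicted_age < low:
        return low - predicted_age
    if predicted_age > high:
        return predicted_age - high
    return 0

predictions = [22, 27, 8]
ground_truth_ranges = [(25, 30), (25, 30), (3, 9)]
errors = [range_error(p, r) for p, r in zip(predictions, ground_truth_ranges)]
print(sum(errors) / len(errors))  # (3 + 0 + 0) / 3 = 1.0 years mean error
```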
As you can see from this distribution, Hive has a significantly lower mean error in the three lowest age buckets (0-2, 3-9, and 10-19). In the 0-2 range, our mean error is 11 times lower than Public Cloud A’s; for the 3-9 and 10-19 ranges, it is 5 times and 3 times lower respectively, still quite a large margin. Hive also excels notably in the oldest age bucket (70+), where our mean error is nearly 7 times lower than Public Cloud A’s.
For a broader analysis, we compared our overall mean error across all age buckets, as well as the accuracy of our gender predictions.
AutoML
One of the newest additions to our product suite, our AutoML platform allows you to train image classification and text classification models, and to fine-tune large language models, on your own custom datasets. To evaluate the effectiveness of this tool, we trained models on our platform and on competitors’ platforms using the same datasets and measured the performance of each resulting model.
For image classification, we used three different classification tasks to account for the fact that different tasks have different levels of inherent difficulty and thus may yield higher- or lower-performing models. We also used three different dataset sizes for each classification task in order to measure how well the AutoML platform works with a limited number of examples.
We compared the resulting models using balanced accuracy, which is the arithmetic mean of a model’s true positive rate and true negative rate. A perfect balanced accuracy is 100%.
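As a quick illustration, here is a minimal sketch of the balanced-accuracy calculation from confusion-matrix counts; the counts are illustrative only.

```python
# A minimal sketch: balanced accuracy is the mean of the true positive rate
# and the true negative rate, computed from confusion-matrix counts.
# The counts below are illustrative only.

def balanced_accuracy(tp, fn, tn, fp):
    true_positive_rate = tp / (tp + fn)
    true_negative_rate = tn / (tn + fp)
    return (true_positive_rate + true_negative_rate) / 2

print(balanced_accuracy(tp=90, fn=10, tn=40, fp=10))  # (0.90 + 0.80) / 2 = 0.85
```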
As shown in the above tables, Hive achieves best or near-best accuracy across all sets. Our results are quite similar to Public Cloud B’s, pulling ahead on the product dataset. We fell to near-best performance on the smoking dataset, which is the most difficult of the three classification tasks. Even then, we remained within a few percentage points of the winner, Public Cloud B.
For text classification, we trained models for three different categories: sexual content, drugs, and bullying. The results are in the table below. Hive outperforms all competitors on all three categories using all dataset sizes.
Another important consideration when it comes to AutoML is training time. An AutoML tool could build accurate models, but if it takes an entire day to do so it still may not be a great solution. We compared the time it took to train Hive’s text classification tool for the drugs category, and found that our platform was able to train the model 10 times as fast as Private Company A and 32 times as fast as Public Cloud B. And for the smallest dataset size of 100 examples, we trained the model 18 times faster than Private Company A and 268 times faster than Public Cloud B. That’s a pretty significant speedup.
Measuring the performance of LLMs fine-tuned on our foundation model is a bit more complicated. Here we evaluate two different tasks: question answering and closed-domain classification.
To measure performance on the question answering task, we used a metric called token accuracy. Token accuracy indicates how many tokens are the same between the model’s response and the expected response from the test set. A perfect token accuracy is 100%. As shown below, our token accuracy is higher than or roughly on par with our competitors’ across all dataset sizes.
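As a rough illustration, here is a simplified sketch of a token-accuracy-style comparison. The whitespace tokenization and strict position-by-position matching are simplifying assumptions for this example and may differ from the tokenization used in the actual evaluation.

```python
# A simplified sketch of a token-accuracy-style metric: the fraction of
# positions at which the model's response matches the expected response.
# Whitespace tokenization and positional matching are simplifying assumptions.

def token_accuracy(expected, response):
    expected_tokens = expected.split()
    response_tokens = response.split()
    matches = sum(e == r for e, r in zip(expected_tokens, response_tokens))
    return matches / max(len(expected_tokens), 1)

print(token_accuracy("Paris is the capital of France",
                     "Paris is the capital of Germany"))  # 5/6 ≈ 0.83
```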
The same holds for the classification task, where we maintained roughly the same performance as Public Cloud A across the various dataset sizes. Below are the full results of our comparison.
Final Thoughts
As illustrated throughout this in-depth look into the performance of our models, we truly earn the title “best-in-class.” We conduct these benchmarks not just to justify that title, but more so as part of our constant effort to make our models the best that they can be. Reviewing these analyses helps us to identify our strengths, yes, but also our weaknesses and where we can improve.
If you have any questions about any of the benchmarks we’ve discussed here or any other questions about our models, please don’t hesitate to reach out to us at sales@thehive.ai.