Live streaming, online voice chat, and teleconferencing have all exploded in popularity in recent years. A wider variety of appealing content, shifting user preferences, and unique pressures of the coronavirus pandemic have all been major drivers of this growth. Daily consumption of video and audio content has steadily increased year-over-year, with a recent survey indicating that a whopping 90% of young people watch video content daily across a variety of platforms.
As the popularity of user-generated audio and video increases, so too does the difficulty of moderating this content efficiently and effectively. While images and text can usually be analyzed and acted on quickly by human moderators, audio/video content – whether live or pre-recorded – is lengthy and linear, requiring significantly more review time for human moderation teams.
Platforms owe it to their users to provide a safe and inclusive online environment. Unfortunately, the difficulties of moderating audio and video – in addition to the sheer volume of content – have led to passive moderation approaches that rely on after-the-fact user reporting.
At Hive, we offer access to robust AI audio moderation models to help platforms meet these challenges at scale. With Hive APIs, platforms can access nuanced model classifications of their audio content in near-real time, allowing them to automate enforcement actions or quickly pass flagged content to human moderators for review. By automating audio moderation, platforms can cast a wider net when analyzing their content and take action more quickly to protect their users.
How Hive Can Help: Speech Moderation
We built our audio solutions to identify harmful or inappropriate speech with attention to context and linguistic subtleties. By natively combining real-time speech-to-text transcription with our best-in-class text moderation model, Hive’s audio moderation API makes our model classifications and a full transcript of any detected speech available with a single API call. Our API can also analyze audio clips sampled from live content and produce results in 10 seconds or less, providing real-time content intelligence that lets platforms act quickly.
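To make the single-call workflow concrete, here is a minimal sketch of assembling such a request. The endpoint path, header format, and field names below are illustrative assumptions, not Hive's documented API; consult Hive's API reference for the actual contract.

```python
# Illustrative sketch only: the endpoint URL, auth header format, and
# payload fields are assumptions, not Hive's documented API.
API_URL = "https://api.example.com/audio/moderate"  # hypothetical endpoint

def build_audio_request(api_key: str, media_url: str) -> dict:
    """Assemble the headers and payload for one synchronous moderation call."""
    return {
        "headers": {"Authorization": f"Token {api_key}"},
        "data": {"url": media_url},
    }

req = build_audio_request("MY_API_KEY", "https://example.com/clip.wav")
# With the `requests` library installed, the call itself would then be:
# resp = requests.post(API_URL, headers=req["headers"], data=req["data"])
# resp.json() would contain both the transcript and the model classifications.
```

A single synchronous call returning both transcript and classifications keeps integration simple: one round trip per clip, whether the clip is pre-recorded or sampled from a live stream.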
Speech Transcription
Effective speech moderation needs to start with effective speech transcription, and we’ve been working hard to improve our transcription performance. Our transcription model is trained on moderation-relevant domains such as video game streams, game lobbies, and argumentative conversations.
In a recent head-to-head comparison, Hive’s transcription model outperformed or was competitive with top public cloud providers on several publicly available datasets (the evaluation data for each set was withheld from training).
Each evaluation dataset consisted of about 10 hours of recorded English speech with varying accents and audio quality. Across these evaluations, Hive’s transcription model achieved lower word error rates than top public cloud models. Word error rate measures the ratio of substituted, missed, and inserted words to the total number of words in the reference transcript; by this measure, Hive’s accuracy was 10-20% higher than competing solutions.
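For readers unfamiliar with the metric, word error rate is the word-level edit distance between the reference and the hypothesis, divided by the reference length. A small reference implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level edit (Levenshtein) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One missed word out of a six-word reference:
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ≈ 0.167
```

Note that WER can exceed 1.0 when a hypothesis inserts many spurious words, which is why it is reported as an error rate rather than an accuracy percentage.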
Audio Moderation
Hive’s audio moderation tools go beyond producing a transcript – we then apply our best-in-class text moderation model to understand the meaning of that speech in context. Here, Hive’s advantage starts with our data. We operate the largest distributed data-labeling workforce in the world, with over five million Hive annotators providing accurate and consensus-driven training labels on diverse example text sourced from relevant domains. For our text models, we leaned on this capability to produce a vast, proprietary training set with millions of examples annotated with human classifications.
Our models classify speech across five main moderation categories: sexual content, bullying, hate speech, violence, and spam. With ample training data at our disposal, our models achieve high accuracy in identifying these types of sensitive speech, especially at the most severe level. Our hate speech model, for example, achieved a balanced accuracy of ~95% in identifying the most severe cases with a 3% false positive rate (based on a recent evaluation using our validation data).
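Balanced accuracy is the mean of the true positive rate and the true negative rate, so the two reported figures fit together: a 3% false positive rate implies a 97% true negative rate, and a true positive rate of roughly 93% then yields the ~95% balanced accuracy. A quick sanity check (the 93% TPR here is illustrative, back-computed from the reported figures):

```python
def balanced_accuracy(tpr: float, tnr: float) -> float:
    """Mean of the true positive rate and the true negative rate."""
    return (tpr + tnr) / 2

fpr = 0.03        # reported false positive rate
tnr = 1 - fpr     # = 0.97
tpr = 0.93        # illustrative true positive rate implied by the figures
print(round(balanced_accuracy(tpr, tnr), 2))  # 0.95
```

Balanced accuracy is a useful metric for moderation because severe violations are rare: a naive model that flags nothing would score high on plain accuracy but 50% on balanced accuracy.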
Thoughtfully chosen, accurately labeled training data is only part of our solution. We also designed our speech models to provide multi-level classifications in each moderation category. Specifically, our model returns a severity score ranging from 0 to 3 (most severe) in each major moderation class, based on its understanding of full sentences and phrases in context. This gives our customers more granular intelligence about their audio content and the ability to tailor moderation actions to community guidelines and user expectations. Alternatively, borderline or controversial cases can be quickly routed to human moderators for review.
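Severity scores make enforcement policy a simple thresholding exercise. The sketch below shows one way a platform might route content on the 0-3 scale described above; the class names, thresholds, and response shape are illustrative assumptions, not Hive's actual output format.

```python
# Illustrative routing policy: class names and thresholds are assumptions,
# chosen by the platform to match its own community guidelines.
AUTO_REMOVE_AT = 3    # most severe: act automatically
HUMAN_REVIEW_AT = 1   # borderline: queue for a human moderator

def route(severity_scores: dict) -> str:
    """Map per-class severity scores (0-3) to a moderation action."""
    worst = max(severity_scores.values())
    if worst >= AUTO_REMOVE_AT:
        return "remove"
    if worst >= HUMAN_REVIEW_AT:
        return "human_review"
    return "allow"

print(route({"sexual": 0, "hate": 3, "violence": 1}))  # remove
print(route({"sexual": 1, "hate": 0, "violence": 0}))  # human_review
print(route({"sexual": 0, "hate": 0, "violence": 0}))  # allow
```

Because the thresholds live in the platform's code rather than the model, a stricter community can auto-remove at severity 2 while a more permissive one reviews everything below 3, all against the same model output.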
In addition to model classifications, our model response object includes a punctuated transcript with confidence scores for each word to allow more insight into your content and enable quicker review by human moderators if desired.
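Per-word confidence scores let reviewers triage uncertain segments first. The response shape below is a hypothetical illustration (the exact field names in Hive's response object may differ); the point is how confidence enables that triage.

```python
# Hypothetical response shape: field names are assumed for illustration,
# not taken from Hive's documented response object.
sample_response = {
    "transcript": [
        {"word": "Hello,", "confidence": 0.98},
        {"word": "everyone.", "confidence": 0.95},
        {"word": "shmurgle", "confidence": 0.41},  # likely mistranscribed
    ]
}

def low_confidence_words(resp: dict, threshold: float = 0.6) -> list:
    """Collect words a human moderator may want to re-check against the audio."""
    return [w["word"] for w in resp["transcript"] if w["confidence"] < threshold]

print(low_confidence_words(sample_response))  # ['shmurgle']
```

A review tool could highlight these low-confidence words inline in the punctuated transcript, so moderators listen only to the seconds of audio that actually need human ears.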
Language Support
We recognize that many platforms’ moderation needs extend beyond English-speaking users. At the time of writing, we support audio moderation for English, Spanish, Portuguese, French, German, Hindi, and Arabic. We train each model separately with an eye towards capturing subtleties that vary across cultures and regions. Coverage of our moderation classes currently varies by language.
We frequently update our models to add support for our moderation classes in each language, and are currently working to add more support for these and other widely spoken languages.
Beyond Words: Sound Classification
Hive’s audio moderation model also offers the unique ability to detect and classify undesirable sounds. This opens up new insights into audio content that may not be captured by speech transcription alone. For example, our audio model can detect explicit or inappropriate noises, shouting, and repetitive or abrasive noises to enable new modalities for audio filtering and moderation. We hope that these sound classifications can help platforms identify toxic behaviors beyond bad speech and take action to improve user experience.
Final Thoughts: Audio Moderation
Hive audio moderation makes it simple to access accurate, real-time intelligence on audio and video content and take informed moderation actions to enforce community guidelines. Our solution is nimble and scalable, helping platforms of all sizes grow with peace of mind. We believe our tools can have a significant impact in curbing toxic or abusive behavior online and lead to better experiences for users.
At Hive, we pride ourselves on continuous improvement. Based on client input, we frequently optimize our models and add features to deepen their understanding and cover more use cases. We’d love to hear any feedback or suggestions you may have, and stay tuned for updates!