3 Tips and Tricks to Building ML Models

Hive | June 23, 2023 (updated March 5, 2025)

Hive was thrilled to have our CTO Dmitriy present at the Workshop on Multimodal Content Moderation during CVPR last week, where we provided an overview of a few important considerations when building machine learning models for classification tasks. What are the effects of data quantity and quality on model performance? Can we use synthetic data in the absence of real data? And after model training is done, how do we spot and address bias in the model's performance? Read on to learn some of the research that has made our models truly best-in-class.

The Importance of Quality Data

Data is, of course, a crucial component in machine learning. Without data, models would have no examples to learn from. It is widely accepted in the field that the more data you train a machine learning model with, the better. Similarly, the cleaner that data is, the better. This is fairly intuitive — the basic principle is true for human learners, too. The more examples there are to learn from, the easier it is to learn. And if those examples aren't very good? Learning becomes more difficult.

But how important is good, clean data to building a good machine learning model? Good data is not always easy to come by. Is it better to use more data at the expense of having more noise?

To investigate this, we trained a binary image classifier to detect NSFW content, varying the amount of training data between 10 images and 100k images. We also varied the amount of noise by flipping the labels on between 0% and 50% of the data. We then plotted the balanced accuracy of the resulting models using the same test set.

The result? It turns out that data quality is more important than we may think. As expected, accuracy was best when the data was both as large as possible (100k examples) and as clean as possible (0% noise). From there, however, the results get more interesting. The model trained with only 10k examples and no noise performs better than the model trained with ten times as much data (100k) and 10% noise. The general trend is similar throughout — clean data matters very much, and noise can quickly tank performance even when using the maximum amount of data. In other words, less data is sometimes preferable to more data if it is cleaner.

We wondered how this would change with a more detailed classification problem, so we built a new binary image classifier. This time, we trained the model to detect images of smoking, a task that involves picking up signal from a small part of an image. The outcome echoes the results from the NSFW model — data cleanliness has a great impact on performance even with a very large dataset. But the quantity of data appears to be more important than it was for the NSFW model. While 5,000 examples with no noise achieved around 90% balanced accuracy for the NSFW model, that same amount of noiseless data only reached around 77% for the smoking classifier. The increase in performance, while still strongly tied to data quantity, was noticeably slower, and only the largest datasets produced well-performing models. It makes sense that quantity of data would matter more for a more difficult classification task. Data noise also remained a crucial factor for the models trained with more data — the 50k model with 10% noise performed about the same as the 100k model with 10% noise, illustrating once more that more data is not always better if it is still noisy.
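The setup behind these experiments is easy to reproduce in miniature. Below is a minimal sketch, not Hive's actual training pipeline: it uses scikit-learn with synthetic stand-in features, injects label noise by flipping a chosen fraction of training labels, and scores every configuration with balanced accuracy on the same clean test set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def flip_labels(labels, noise_rate, seed=0):
    """Return a copy of `labels` with a `noise_rate` fraction flipped (0 <-> 1)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip_idx = rng.choice(len(labels), size=int(noise_rate * len(labels)), replace=False)
    noisy[flip_idx] = 1 - noisy[flip_idx]
    return noisy

# Stand-in features; in the real experiments these would come from images.
X, y = make_classification(n_samples=20_000, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for train_size in (100, 1_000, 10_000):
    for noise_rate in (0.0, 0.1, 0.5):
        y_noisy = flip_labels(y_train[:train_size], noise_rate)
        clf = LogisticRegression(max_iter=1000).fit(X_train[:train_size], y_noisy)
        # The test labels stay clean so every configuration is scored against the same ground truth.
        acc = balanced_accuracy_score(y_test, clf.predict(X_test))
        print(f"train_size={train_size:>6}  noise={noise_rate:.0%}  balanced_acc={acc:.3f}")
```

Sweeping the training size higher and the noise rate between 0% and 50% reproduces the shape of the trade-off described above, with the clean test set isolating the effect of noisy training labels.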
Our general takeaways here are that while both data quality and quantity matter quite a bit, clean data is more important beyond a certain quantity threshold. This threshold is where performance gains begin to plateau as the dataset grows larger, yet noisy data continues to have significant effects on model quality. And as we saw by comparing the NSFW model and the smoking one, this threshold also changes depending on the difficulty of the classification task itself.

Training on Synthetic Data: Does it Help or Hurt?

So having lots of clean data is important, but what can be done when good data is hard to find or costly to acquire? With the rise of AI image generation over the past few years, more and more companies have been experimenting with generated images to supplement visual datasets. Can this kind of synthetic data be used to train visual classification models that will eventually classify real data?

To try this out, we trained five different binary classification models to detect smoking. Three of the models were trained exclusively on real data (10k, 20k, and 40k examples respectively), one on a mix of real and synthetic images (10k real and 30k synthetic), and one entirely on synthetic data (40k). Each dataset had an even split of 50% smoking and 50% non-smoking examples. To evaluate the models, we used two balanced test sets: one with 4k real images and one with 4k synthetic images. All synthetic images were created using Stable Diffusion.

Looking at the precision and recall curves for the various models, we made an interesting discovery. Unsurprisingly, the model trained on the largest all-real dataset (40k) performed the best. But the one trained on 10k real images and 30k synthetic images performed significantly better than the one trained only on 10k real images. These results suggest that while large amounts of real data are best, a mixture of synthetic and real data could in fact boost model performance when little real data is available.

Keeping an Eye Out For Bias

After model training is finished, extensive testing must be done to make sure there aren't any biases in the model's results. Some biases exist in the real world and are thus often ingrained in real-world data, such as racial or gender bias; others occur in the data purely by coincidence. A great example of how unpredictable certain biases can be came up recently while training a model for NSFW detection, where the model started flagging many pictures of computer keyboards as false positives. Upon closer investigation, we found that this occurred because many of the NSFW pictures in our training data were photos of computers whose screens were displaying explicit content. Since the computer screens were the focus of these images, keyboards were also often included, leading to the false association that keyboards are an indicator of NSFW imagery.

Three images that were falsely categorized as NSFW

To correct this bias, we added more non-NSFW keyboard examples to the training data. Addressing the bias in this way not only fixes the specific issue, but also boosts general model performance. Of course, correcting bias is even more critical when dealing with data that carries current or historical biases against minority groups, since leaving them in place perpetuates them by ingraining them into future technology.
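One lightweight way to surface unexpected associations like the keyboard example above is to slice a model's errors by descriptive tags and look for concepts that are over-represented among false positives. The snippet below is a hypothetical sketch of that idea; the records, tags, and labels are invented for illustration, and in practice the tags would come from metadata or a separate tagging model, not from Hive's internal tooling.

```python
from collections import Counter

# Hypothetical review data: each record holds the ground-truth label, the model's
# prediction, and descriptive tags for the image (from metadata or another model).
results = [
    {"label": "clean", "pred": "nsfw",  "tags": ["keyboard", "desk"]},
    {"label": "clean", "pred": "nsfw",  "tags": ["keyboard", "monitor"]},
    {"label": "clean", "pred": "clean", "tags": ["keyboard", "office"]},
    {"label": "clean", "pred": "clean", "tags": ["beach"]},
    {"label": "nsfw",  "pred": "nsfw",  "tags": ["monitor"]},
    # ... thousands more rows in a real audit
]

false_positive_tags = Counter()
clean_tags = Counter()
for r in results:
    if r["label"] == "clean":
        clean_tags.update(r["tags"])
        if r["pred"] == "nsfw":
            false_positive_tags.update(r["tags"])

# Tags that appear in false positives far more often than in clean images overall
# point to spurious associations worth fixing with targeted counter-examples.
for tag, fp_count in false_positive_tags.most_common():
    share = fp_count / clean_tags[tag]
    print(f"{tag}: {fp_count} false positives ({share:.0%} of clean images with this tag)")
```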
The importance of detecting and correcting these biases cannot be overstated, since leaving them unaddressed carries a significant amount of risk beyond simply calling a keyboard NSFW. Regardless of the type of bias, it's important to note that biases aren't always readily apparent. Prior to addressing the bias, the original model had a balanced accuracy of 80%, high enough that the problem may not have been immediately noticeable since errors weren't especially frequent. The takeaway here is thus not just that bias correction matters, but that looking for potential biases is necessary even when you might not think they're there.

Takeaways

Visual classification models are in many ways the heart of Hive — they were our main launching point into the space of content moderation and AI-powered APIs more broadly. We're continuously searching for ways to keep improving these models as the research surrounding them grows and evolves. Conclusions like those discussed here — the importance of clean data, particularly when you have lots of it, the possible use of synthetic data when real data is lacking, and the need to find and correct all biases (don't forget about the unexpected ones!) — greatly inform the way we build and maintain our products.
Build Your Own Custom ML Models with Hive AutoML

Hive | May 3, 2023 (updated March 5, 2025)

We're excited to announce Hive's new AutoML tool, which provides customers with everything they need to train, evaluate, and deploy customized machine learning models. Our pre-trained models solve a wide range of use cases, but we will always be limited by the number of models we can build. Now customers who find that their unique needs and moderation guidelines don't quite match any of our existing solutions can create their own, custom-built for their platform and easily accessible via API.

AutoML can be used to augment our current offerings or to create new models entirely. Want to flag a particular subject that doesn't exist as a head in our Text Moderation API, or a certain symbol or action that isn't part of our Visual Moderation? With AutoML, you can quickly build solutions for these problems that are already integrated with your Hive workflow.

Let's walk through our AutoML process to illustrate how it works. In this example, we'll build a text classification model that can determine whether or not a given news headline is satirical.

First, we need to get our data in the proper format. For text classification models, all dataset files must be in CSV format. One column should contain the text data (titled text_data) and all other columns represent model heads (classification categories). The values within each row of any given column represent the classes (possible classifications) within that head. An example of this formatting for our satire model is shown below, and a short code sketch at the end of this walkthrough shows one way to produce it.

The first page you'll see on Hive's AutoML platform is a dashboard with all of your organization's training projects, which displays the training and deployment status of each project. To create our satire classifier, we're going to make a new project by hitting the "Create New Project" button in the top right corner. We'll then be prompted to provide a name and description for the project as well as training data in the form of a CSV file. For test data, you can either upload a separate CSV file or choose to randomly split your training data into two files, one to be used for training and the other for testing. If you decide to split your data, you can choose the percentage you would like to split off. After all of that is entered, we are ready to train!

Beginning model training is as easy as hitting a single button. While your model trains, you can view its training status on the Training Projects page. Once training is completed, your project page will show an analysis of the model's performance. The boxes at the top let you decide whether to look at this analysis for a particular class or overall. If you're building a multi-headed model, you can also choose which head you'd like to evaluate. We provide precision, recall, and balanced accuracy for all confidence thresholds as well as a PR curve. We also display a confusion matrix to show how many predictions were correct and incorrect per class.

Once you're satisfied with your model's performance, select the "Create Deployment" button to launch the model. Like model training, deployment takes a few moments. After model deployment is complete, you can view the deployment in your Hive customer dashboard, where you can access your API key, view current tasks, and access other information just as you would with our pre-trained models.
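To make the dataset formatting described above concrete, here is a minimal sketch of how a training CSV for the satire classifier could be assembled with pandas. The text_data column name follows the requirement described earlier; the head name is_satire, its class values, and the example headlines are illustrative assumptions rather than Hive's actual dataset.

```python
import pandas as pd

# One text column named "text_data"; every other column is a model head whose
# row values are the classes (possible classifications) for that head.
rows = [
    {"text_data": "Nation's Dogs Vow To Keep Barking At Absolutely Nothing", "is_satire": "satire"},
    {"text_data": "City council approves new downtown bike lanes", "is_satire": "not_satire"},
]
pd.DataFrame(rows).to_csv("satire_training_data.csv", index=False)
```

From there, the CSV is uploaded as the training file, and the test set can either be a second CSV prepared the same way or a random split performed by the platform.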
We're very excited to be adding AutoML to our offerings. The platform currently supports both text and image classification, and we're working to add support for large language models next. If you'd like to learn more about our AutoML platform and other solutions we're building, please feel free to reach out to sales@thehive.ai or contact us here.
Flag AI-Generated Text with Hive's New Classifier

Hive | February 1, 2023 (updated March 5, 2025)

Hive is excited to announce our new classifier that differentiates between AI-generated and human-written text. The model is hosted on our website as a free demo, and we encourage users to test out its performance.

The recent release of OpenAI's ChatGPT model has raised questions about how public access to these kinds of large language models will impact the field of education. Certain school districts have already banned access to ChatGPT, and teachers have been adjusting their teaching methods to account for the fact that generative AI has made academic dishonesty a whole lot easier. Since the rise of internet plagiarism, plagiarism detectors have become commonplace at academic institutions. Now a need has arisen for a new kind of detection: flagging AI-generated text.

Our AI-Generated Text Detector outperforms key competitors, including OpenAI itself. We compared our model to OpenAI's detector as well as two other popular AI-generated text detection tools: GPTZero and Writer's AI Content Detector. Our model was the clear frontrunner, not just in terms of balanced accuracy but also in terms of false positive rate — a critical factor when these tools are deployed in an educational setting.

Our test dataset consisted of 242 text passages, including ChatGPT-generated text as well as human-written text. To ensure that our model behaves correctly on all genres of content, we included everything from casual writing to more technical and academic writing. We took special care to include texts written by people learning English as a second language, to make sure that their writing is not incorrectly categorized due to differences in tone or wording. On these test examples, our balanced accuracy stands at an impressive 99%, while the closest competitor, GPTZero, reaches 83%. OpenAI's own detector scored lowest of the bunch, at only 73%.

Others have tested our model against OpenAI's in particular and have echoed our findings. Following OpenAI's classifier release, Mark Hachman at PCWorld published an article suggesting that those disappointed with OpenAI's model should turn to Hive's instead. In his own informal testing of our model, he praised our results for their accuracy as well as our inclusion of clear confidence scores for every result.

A large fear about using these sorts of detector tools in an educational setting is the potentially catastrophic impact of false positives: cases in which human-written work is classified as AI-generated. While building our model, we were mindful that the risk of such high-cost false positives is one that many educators may not want to take. In response, we prioritized lowering our false positive rate. On the test set above, our false positive rate is just 1%, compared to OpenAI's at 12.5%, Writer's at 46%, and GPTZero's at 30%.

Even with our low false positive rate, we encourage using this tool as part of a broader process when investigating academic dishonesty, not as the sole decision maker. Just like plagiarism checkers, it is meant to be a helpful screening tool, not a final judge. We are continuously working to improve our model, and any feedback is greatly appreciated. Large language models like ChatGPT are here to stay, and it is crucial to provide educators with tools they can use as they decide how to navigate these changes in their classrooms.
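For readers unfamiliar with the two metrics used in this comparison, here is a small worked sketch showing how balanced accuracy and false positive rate fall out of a binary confusion matrix. The counts are made up for illustration and are not our actual evaluation numbers.

```python
# Made-up counts for a detector evaluated on a mixed test set.
tp = 95   # AI-generated passages correctly flagged
fn = 5    # AI-generated passages missed
tn = 99   # human-written passages correctly passed
fp = 1    # human-written passages wrongly flagged (the costly error in a classroom)

true_positive_rate = tp / (tp + fn)   # sensitivity on AI-generated text
true_negative_rate = tn / (tn + fp)   # specificity on human-written text

balanced_accuracy = (true_positive_rate + true_negative_rate) / 2
false_positive_rate = fp / (fp + tn)

print(f"balanced accuracy:   {balanced_accuracy:.1%}")   # 97.0%
print(f"false positive rate: {false_positive_rate:.1%}")  # 1.0%
```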
Spot Deepfakes With Hive's New Deepfake Detection API

Hive | November 2, 2022 (updated July 4, 2024)

Contents
- The Danger of Deepfakes
- A Look Into Our Model
- Putting It All Together: Example Input and Response
- Final Thoughts

The Danger of Deepfakes

When generative AI models first gained popularity in the late 2010s, they brought with them the ability to create deepfakes. Deepfakes are synthetic media, typically video, in which one person's likeness is replaced by another's using deep learning. They are powerful tools for fraud and misinformation, allowing for the creation of synthetic videos of political leaders and letting scammers easily take on new identities.

The primary use of deepfake technology, though, is the fabrication of nonconsensual pornography. The term "deepfake" itself was coined in 2017 by a Reddit user of the same name who made fake pornographic videos featuring popular female celebrities. In 2019, the company Sensity AI catalogued deepfakes across the web and reported that a whopping 96% of them were pornographic, all of which depicted women. In the years since, more of this sort of deepfake pornography has become readily available online, with countless forums and even entire porn sites dedicated to it. The targets are not just celebrities; they are also everyday women superimposed into adult content by request — on-demand revenge porn for anyone with an internet connection. Many sites have banned deepfakes entirely, since they are far more often used for harm than for good.

At Hive, we're committed to providing API-accessible solutions for challenging moderation problems like this one. We've built our new Deepfake Detection API to empower enterprise customers to easily identify and moderate deepfake content hosted on their platforms. This blog post explains how our model identifies deepfakes and introduces the new API that makes this functionality accessible.

A Look Into Our Model

Hive's Deepfake Detection model is essentially a version of our Demographic API that is optimized to identify deepfakes rather than demographic attributes. When a query is submitted, the visual detection model locates any faces present in the input. It then performs an additional classification step that determines whether or not each detected face is a deepfake. In its response, it provides a bounding-box location and classification (with confidence scores) for each face. While the face detection step is the same as the one used for our industry-leading Demographic API, the classification step was fine-tuned for deepfake identification by training on a vast repository of synthetic and real video data. Many of these examples were pulled from genres commonly associated with deepfakes, such as pornography, celebrity interviews, and movie clips. We also included other types of examples in order to create a classifier that identifies deepfakes across many different content genres.

Putting It All Together: Example Input and Response

With only one head, the response of our Deepfake Detection model is easily interpretable. When an image or video query is submitted, it is first split into frames. Each frame is then analyzed by our visual detection model to find any faces present in the image. Every face then receives a deepfake classification — either yes_deepfake or no_deepfake. Confidence scores for these classifications range from 0.0 to 1.0, with a higher score indicating higher confidence in the model's results.
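Before looking at the example below, here is a rough, hypothetical sketch of the shape such a response might take for a single detected face. The field names and nesting are illustrative assumptions based on the description above, not Hive's documented schema.

```python
# Hypothetical response shape for an image containing one detected face.
example_response = {
    "status": "completed",
    "output": [
        {
            "bounding_poly": [
                {
                    "vertices": "...",    # face location, truncated here for clarity
                    "dimensions": "...",  # truncated here for clarity
                    "classes": [
                        {"class": "yes_deepfake", "score": 0.992},
                        {"class": "no_deepfake", "score": 0.008},
                    ],
                }
            ]
        }
    ],
}

# A simple moderation rule: flag any face whose deepfake score clears a threshold.
THRESHOLD = 0.9
for frame in example_response["output"]:
    for face in frame["bounding_poly"]:
        scores = {c["class"]: c["score"] for c in face["classes"]}
        if scores.get("yes_deepfake", 0.0) >= THRESHOLD:
            print("Flag for review:", scores)
```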
Example Deepfake Detection input and API response

Here we see the deepfaked image and, to its left, the two original images used to create it. The input image doesn't appear to be fake at first glance, especially when displayed at a small size. Even on close examination, a human reviewer could fail to realize that it is actually a deepfake. As the example illustrates, the model correctly identifies this realistic deepfake with a high confidence score of more than 0.99. Since there is only one face present in this image, we see one corresponding "bounding poly" in the response, which contains all of the model's response information for that face. Vertices and dimensions are also provided, though those fields are truncated here for clarity.

Because deepfakes like this one can be very convincing, they are difficult to moderate with manual flagging alone. Automating this task not only accelerates moderation processes, but also helps spot realistic deepfakes that human reviewers might miss. Digital platforms, particularly those that host NSFW media, can integrate the Deepfake Detection API into their workflows by automatically screening all content as it is posted. Video communication platforms and applications that use any kind of visual identity verification can also use our model to counter deepfake fraud.

Final Thoughts

Hive's Deepfake Detection API joins our recently released AI-Generated Media Recognition API in our aim to expand content moderation to keep up with the fast-growing domain of generative AI. Moving forward, we plan to continually update both models to keep up with new generative techniques, popular content genres, and emerging customer needs. The recent popularity of diffusion models like Stable Diffusion, Midjourney, and DALL-E 2 has brought deepfakes back into the spotlight and sparked conversation about whether these newer generative techniques can be used to develop brand-new ways of making them. Whether or not this happens, deepfakes aren't going away any time soon and are only growing in number, popularity, and quality. Identifying and removing them across online platforms is crucial to limit the fraud, misinformation, and digital sexual abuse that they enable.

If you'd like to learn more about our Deepfake Detection API and other solutions we're building, please feel free to reach out to sales@thehive.ai or contact us here.
Detect and Moderate AI-Generated Artwork Using Hive's New API

Hive | September 23, 2022 (updated July 5, 2024)

Try Our Demo: To try our AI-Generated Image Detection model out for yourself, check out our demo.

Contents
- A New Need for Content Moderation
- Using AI to Identify AI: Building Our Classifier
- How it Works: An Example Input and Response
- Final Thoughts and Future Directions

A New Need for Content Moderation

In the past few months, AI-generated art has experienced rapid growth in both popularity and accessibility. Engines like DALL-E, Midjourney, and Stable Diffusion have spurred an influx of AI-generated artworks across online platforms, prompting an intense debate around their legality, artistic value, and potential for enabling the propagation of deepfake-like content. As a result, certain digital platforms such as Getty Images, InkBlot Art, Fur Affinity, and Newgrounds have announced bans on AI-generated content entirely, with more likely to follow in the coming weeks and months.

Platforms are enacting these bans for a variety of reasons. Online communities built for artists to share their work, such as Newgrounds, Fur Affinity, and Purpleport, stated that they put their AI artwork bans in place to keep their sites focused exclusively on human-created art. Other platforms have taken action against AI-generated artwork due to copyright concerns. Image synthesis models often include copyrighted images in their training data, which consist of massive amounts of photos and artwork scraped from across the web, typically without any artists' consent. It is an open question whether this type of scraping and the resulting AI-generated artwork amount to copyright violations — particularly in the case of commercial use — and platforms like Getty and InkBlot Art don't want to take that risk.

As part of Hive's commitment to providing enterprise customers with API-accessible solutions to moderation problems, we have created a classification model built specifically to help digital platforms enforce these bans. Our AI-Generated Media Recognition API is built with the same type of robust classification model as our industry-leading visual moderation products, and it enables enterprise customers to moderate AI-generated artwork without relying on users to flag images manually. This post explains how our model works and the new API that makes this functionality accessible.

Using AI to Identify AI: Building Our Classifier

Hive's AI-Generated Media Recognition model is optimized for the kind of media produced by popular generative engines such as DALL-E, Midjourney, and Stable Diffusion. It was trained on a large dataset comprising millions of artificially generated images and human-created images such as photographs, digital and traditional art, and memes sourced from across the web. The resulting model is able to identify AI-created images among many different types and styles of artwork, even correctly identifying AI artwork that could be misidentified by manual flagging. Our model returns not only whether or not a given image is AI-generated, but also the likely source engine it was generated with. Each classification is accompanied by a confidence score ranging from 0.0 to 1.0, allowing customers to set a confidence threshold to guide their moderation.

How it Works: An Example Input and Response

When it receives an input image, our AI-Generated Media Recognition model returns classifications under two separate heads. The first provides a binary classification as to whether or not the image is AI-generated. The second, which is only relevant when the image is classified as AI-generated, identifies the source of the image from among the most popular generation engines currently in use.
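As a hypothetical illustration of how a platform might act on a two-head response of this shape, the sketch below applies a confidence threshold to the binary head and reads off the most likely source engine. The scores, field names, and threshold are invented for illustration and do not reflect Hive's actual response schema.

```python
# Hypothetical two-head output: the scores within each head sum to 1.
result = {
    "ai_generated": {"ai_generated": 0.97, "not_ai_generated": 0.03},
    "source_engine": {"midjourney": 0.88, "stable_diffusion": 0.07, "dalle": 0.05},
}

THRESHOLD = 0.9  # tune per platform: a higher value trades recall for fewer false positives

is_ai = result["ai_generated"]["ai_generated"] >= THRESHOLD
likely_source = max(result["source_engine"], key=result["source_engine"].get)

if is_ai:
    print(f"Hold for review: likely generated with {likely_source}")
else:
    print("Allow: no strong evidence of AI generation")
```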
To get a sense of the capabilities of our AI-Generated Media Recognition model, here's a look at an example classification. This input image was created with the AI model Midjourney, though it is so realistic that it may be missed by manual flagging. As shown in the response, our model correctly classifies the image as AI-generated with a high confidence score of 0.968. The model also correctly identifies the source of the image, with a similarly high confidence score. Other sources like DALL-E are also returned along with their respective confidence scores, and the scores under each of the two model heads sum to 1.

Platforms that host artwork of any kind can integrate the AI-Generated Media Recognition API into their workflows by automatically screening all content as it is posted. This method of moderating AI artwork works far more quickly than manual flagging and can catch realistic artificial artworks that even human reviewers might miss.

Final Thoughts and Future Directions

Digital platforms are now being flooded with AI-generated content, and that influx will only increase as these generative models continue to grow and spread. On top of this, image generation tools are fast and easy to access online, which enables large quantities of AI artwork to be produced quickly. Moderating artificially created artworks is crucial for many sites to maintain their platform's mission and protect themselves and their customers from potential legal issues further down the line.

We created our AI-Generated Media Recognition API to solve this problem, but our model will need to continue to evolve along with image generation models as existing ones improve and new ones are released. We plan on adding new generative engines to our sources as well as continually updating our model to keep up with the current capabilities of these models. Since some newer generative models can create video in addition to still images, we are also working to add support for video formats within our API in order to best prevent all types of AI-generated artwork from dominating online communities where it is unwelcome.

If you'd like to learn more about this and other solutions we're building, please feel free to reach out to sales@thehive.ai or contact us here.
Mensio Product Update

Hive | July 28, 2022 (updated July 4, 2024)

Mensio aims to provide users with the most comprehensive and granular data available in the industry to inform better decisions on how to optimize the investment of marketing dollars. In recent months, brands, agencies, and rights holders alike have expressed interest in being able to (a) measure the presence and value of verbal mentions within programming alongside visual exposures and (b) within sports, understand the relative contributions of different sponsorship assets to total exposure and its associated value. We are excited to announce a set of major product upgrades incorporating these requests, now live within Mensio's Sponsorship & Branded Content modules.

While we have supported these capabilities "off-platform" using Hive's Logo Location and Brand Mentions models for multiple years, the inclusion of these capabilities in-platform provides reduced friction, faster access to data, and richer levels of brand- and property-level analysis as well as competitive intelligence. Below is a brief summary of what's new; as with all releases, notes will also appear as a pop-up in-platform upon your next log-in. Your Hive point of contact will additionally introduce the new capabilities live in your next scheduled meeting and, if that meeting is not imminent, will be reaching out to schedule time for an overview of the new features at your earliest convenience.

New Features

Now Available: Reporting by Asset Type For Televised Sports Programming

Sponsorship & Branded Content modules now include reporting of exposures by asset type across most televised sports programming. Reporting includes 25+ standard asset types, including jerseys, TVGI / digital overlays, lower-level banners (e.g., outfield wall, dasherboards, courtside LED), basket stanchions, and more. Within the platform, asset types are integrated as filters into existing reporting of visual exposures by brand, by program, and by occurrence. Additionally, two new pages have been added featuring asset-centric views of exposure by brand and by program. Data is currently available for all relevant programming since June 1, 2022, and will be available going back to September 1, 2022 shortly. Notifications will appear within the platform as additional historical information becomes available.

Now Available: Reporting of Verbal Mentions Across Television Programming

Sponsorship & Branded Content television modules now include reporting of verbal mentions across all television programming. Verbal mentions are integrated into summary metrics in the Competitive Insights section and have dedicated pages for deep dives by brand, by program, and by occurrence. Data is currently available for all relevant programming since June 1, 2021, and will be available going back to October 1, 2018 shortly. Notifications will appear within the platform as additional historical information becomes available.

Now Available: Updated Module Definition and Navigation

To accommodate the expanded data, we have reconfigured module contents and navigation for Sponsorship & Branded Content television modules.
Specifically:
- "Television – By Brand" merges "National TV (Branded Content)" and "Regional Sports TV (Branded Content)" into a single module, where programming across network types can be viewed in a single chart (and can be separated using the Network Type filter if desired).
- "Television – By Team" replaces "TV – Team Sponsorship", maintaining the ability to additionally filter brand exposures by the associated sports team(s). The programming in this module includes all available NFL, NBA, MLB, and NHL live games and replays across national television and regional sports networks, as well as team-specific studio shows (e.g., Warriors Postgame).
- "Television – Teams as Brands" replaces "National TV – Team Exposure", maintaining the ability to view team-level exposures in sports talk and highlights.
- The sidebar design across the Television – By Brand and Television – By Team modules has been evolved to accommodate additional metrics and streamline access to individual charts and tables.

We are excited by initial feedback to these module updates, and look forward to continuing to provide product innovation on a regular basis. Please reach out to your representative with any questions or needs as you experience the module upgrades. We look forward to your continued feedback and thank you for your trust in Mensio.