
The Race for Automotive Sponsorship

At a glance:

  • Auto manufacturers earned an estimated $1.1 billion in media value from visual logo exposures on national and regional TV over the past year, with Toyota capturing the highest share of voice across programming
  • TV exposures for auto brands are correlated with sports seasons, with official league sponsors dominating in playoff and championship months; additionally, in the fight for share of voice on TV, auto manufacturers strategically invest in specific categories, with a different brand leader across each sport
  • The return on investment of different placement types can vary; for example, looking at Major League Baseball in the month of April as a case study, exposures of Toyota and Ford outweighed those of official league sponsor Chevrolet
  • Measuring amplification of exposures in shoulder programming and highlights – typically overlooked by sponsors today – nearly doubles the number of unique programs with auto logos exposed and increases media value by 17%

As TV viewership continues to fragment across different platforms, the ability of sponsorships to ensure brand exposure within desired content has never been more important. However, sponsorship activations themselves are fragmented across sports and rights holders (e.g., teams, leagues, broadcast partners), resulting in demand for better data to measure the effectiveness of automakers’ own investments and to monitor a dynamic competitive environment.

The sponsorship landscape among automakers was analyzed using data from Mensio, Hive’s AI-powered media intelligence platform. Here’s what we learned.

1. In aggregate, auto manufacturers (OEMs) garnered an estimated $1.1 billion in media value from visual logo exposure on national and regional TV over the past year. Over 80% of this value was owned by the top 10 most exposed brands (out of a total compared dataset of 67 brands), speaking to market concentration on TV. Toyota led the pack by a large margin as the “Let’s go places” manufacturer indeed went everywhere on TV. With half the estimated media value of Toyota, Ford was the second-highest earning OEM, followed by Kia, Honda, and Chevrolet.

2. While sports sponsorships are typically rooted in the objective of aligning automakers’ brands with a given sport and/or team and its fans, sports sponsorships also dictate the time of year when different brands capture outsized share of voice. Each of the official league sponsors of the four largest US sports leagues experienced spikes in their share of voice for in-content brand exposures during playoff and championship periods: Kia (NBA) in April/May, Honda (NHL) in June/July, Chevrolet (MLB) in October, and Toyota (NFL) in February (its most exposed month, despite high visibility throughout the year). Additionally, Mercedes-Benz’ sponsorship of the U.S. Open drives a sharp spike in its exposures in September, with over $2M in estimated media value from the Men’s Championship match alone.

3. In the fight for share of voice within in-content brand exposure on television, brands placed differing bets across genres. Within sports, a different brand dominates each league, with official league partners leading the way. However, exposure wasn’t limited to league partners; Ford’s sponsorship of NFL pre-game programming is one example among many team- and broadcast-level activations where brands have competed for share of voice within a sport outside of official league sponsorships. Outside of sports, other entertainment genres attracted their own investments from auto companies, such as Mercedes-Benz’ leading presence in talk shows and awards/special programming.

4. Zooming in on the first month of the 2022 Major League Baseball season across national TV and regional sports networks presents an interesting early season case study. While Chevrolet – the official league sponsor – will likely increase exposures as the season continues, the brand started the year ranked #3 in share of voice for in-game brand exposures. Heavy team and broadcast sponsorship investments made by Toyota and Ford outweighed Chevy, illustrating alternate tactics to reach the same audience at different points during the season.

5. Given the massive investment and competition for the best placements, it is important for brands to fully measure their onscreen exposures. Currently, most brands are limited to “whistle to whistle” measurement focusing on in-game exposures, and sometimes the additional exposure from social media. The fragmentation of shoulder programming and highlights has traditionally been difficult to measure at scale; however, doing so provides a far more comprehensive understanding of performance from a given activation. Using always-on measurement from Mensio, which reports across every second of every program from 100+ national TV networks and regional sports networks, we estimate that amplification from shoulder programming and highlights almost doubles the number of unique programs with auto brand logos exposed, increasing duration of in-content brand exposures by 32% and the associated equivalent media value by 17%.

Credible competitive intelligence data is critical in making decisions on the best sponsorship placements. Mensio, Hive’s AI-powered media intelligence platform, provides always-on measurement of in-content brand exposure for more than 7,000 brands across 24/7 programming from 100+ national TV channels and regional sports networks. 

Mensio also allows brands to understand how their share of voice compares to that of competitors, both at the program level and in aggregate. For more information on Mensio, or to schedule a demo and learn how it can support your brand, reach out to Hive at demo@thehive.ai.

Note: This analysis looked at in-program auto manufacturer logo exposures from May 2021 to April 2022 on national and regional TV (excluding commercials) and includes Tier 1, Tier 2, and Tier 3 placements.


Deep Learning Methods for Moderating Harmful Viral Content


Content Moderation Challenges in the Aftermath of Buffalo

The racially-motivated shooting in a Buffalo supermarket – live streamed by the perpetrator and shared across social media – is tragic on many levels.  Above all else, lives were lost and families are forever broken as a result of this horrific attack.  Making matters worse, copies of the violent recording are spreading on major social platforms, amplifying extremist messages and providing a blueprint for future attacks.

Unfortunately, this is not a new problem: extremist videos and other graphic content have been widely shared for shock value in the past, with little regard for the negative impacts. And bad actors are more sophisticated than ever, uploading altered or manipulated versions to thwart moderation systems.

As the world grapples with broader questions of racism and violence, we’ve been working with our partners behind the scenes to help control the spread of this and other harmful video content in their online communities.  This post covers the concerns these partners have raised with legacy moderation approaches, and how newer technology can be more effective in keeping communities safe. 

Conventional Moderation and Copy Detection Approaches

Historically, platforms relied on a combination of user reporting and human moderation to identify and react to harmful content. Once the flagged content reaches a human moderator, enforcement is usually quick and highly accurate. 

But this approach does not scale for platforms with millions (or billions) of users.  It can take hours to identify and act on an issue, especially in the aftermath of a major news event when post activity is highest.  And it isn’t always the case that users will catch bad content quickly: when the Christchurch massacre was live streamed in 2019, it was not reported until 12 minutes after the stream ended, allowing the full video to spread widely across the web.

More recently, platforms have found success using cryptographic hashes of the original video to automatically compare against newly posted videos.  These filters can quickly and proactively screen high volumes of content, but are generally limited to detecting copies of the same video. Hashing checks often miss content if there are changes to file formats, resolutions, and codecs. And even the most advanced “perceptual” hashing comparisons – which preprocess image data in order to consider more abstract features – can be defeated by adversarial augmentations.  
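
To make that limitation concrete, here is a minimal sketch of a perceptual-hash check using the open-source imagehash library; the filenames and the distance threshold are placeholder assumptions, not Hive's production pipeline. A straightforward re-encode usually stays within the threshold, but adversarial crops, filters, or overlays can push the distance past it and slip through.

# Minimal sketch of a perceptual-hash comparison (not Hive's production system).
# Filenames and the distance threshold are illustrative placeholders.
import imagehash
from PIL import Image

original = imagehash.phash(Image.open("original_frame.jpg"))
reupload = imagehash.phash(Image.open("reuploaded_frame.jpg"))

# Hamming distance between the two 64-bit perceptual hashes.
distance = original - reupload
print(f"pHash distance: {distance}")

if distance <= 10:  # common heuristic cutoff
    print("Treated as a copy of the known video frame")
else:
    print("Hashes differ too much -- an adversarially edited copy slips through")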

Deep Learning To Advance Video Moderation and Contain Viral Content

Deep learning models can close the moderation capability gap for platforms in multiple ways. 

First, visual classifier models can proactively monitor live or prerecorded video for indicators of violence. These model predictions enable platforms to shut down or remove content in real time, preventing the publishing and distribution of policy-breaking videos in the first place. The visual classifiers can look for combinations of factors – such as someone holding a gun, bodily injury, blood, and other object or scene information – to create automated and nuanced enforcement mechanisms. Specialized training techniques can also teach visual classifiers to accurately distinguish between real violence and photorealistic violence depicted in video games, so that something like a first-person shooter game walkthrough is not mistaken for a real violent event.

In addition to screening with visual classifiers, platforms can harness new types of similarity models to stop reposts of videos confirmed to be harmful, even if those videos are adversarially altered or manipulated. If modified versions somehow bypass visual classification filters, these models can catch them based on visual similarity to the original version.

In these cases, self-supervised training techniques expose the models to a range of image augmentation and manipulation methods, enabling them to accurately assess human perceptual similarity between image-based content. These visual similarity models can detect duplicates and close copies of the original image or video, including more heavily modified versions that would otherwise go undetected by hashing comparisons.

Unlike visual classifiers, these models do not look for specific visual subject matter in their analysis.  Instead, they quantify visual similarity on a spectrum based on overlap between abstract structural features. This means there’s no need to produce training data to optimize the model for every possible scenario or type of harmful content; detecting copies and modified versions of known content simply requires that the model accurately assess whether images or video come from the same source.

How it works: Deep Learning Models in Automated Content Moderation Systems

Using predictions from these deep learning models as a real-time signal offers a powerful way to proactively screen video content at scale. These model results can inform automated enforcement decisions or triage potentially harmful videos for human review. 

Advanced visual classification models can accurately distinguish between real and photorealistic animated weapons. Here are results from video frames containing both animated and real guns. 

To flag real graphic violence, automated moderation logic could combine confidence scores for classes like actively held weapons, blood, and/or corpses, while excluding more benign content like the video game examples above.
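
As a rough sketch of what that enforcement logic could look like, the snippet below combines per-frame classifier confidence scores into a single decision. The class names and thresholds are assumptions chosen for illustration, not Hive's actual class taxonomy or recommended values.

# Sketch of rule-based enforcement on top of visual classifier outputs.
# Class names and thresholds are illustrative assumptions only.
def should_flag_frame(scores: dict) -> bool:
    gun_score = scores.get("gun_in_hand", 0.0)        # actively held weapon
    animated_score = scores.get("animated_gun", 0.0)  # photorealistic game footage
    blood_score = scores.get("blood", 0.0)
    corpse_score = scores.get("corpse", 0.0)

    # Treat confidently animated content (e.g., game walkthroughs) as benign.
    if animated_score > 0.90:
        return False

    # Require an actively held weapon plus corroborating evidence of real harm.
    return gun_score > 0.90 and (blood_score > 0.80 or corpse_score > 0.80)

# Example: scores for one frame sampled from a live stream
frame_scores = {"gun_in_hand": 0.97, "animated_gun": 0.02, "blood": 0.88, "corpse": 0.10}
if should_flag_frame(frame_scores):
    print("Flag stream for automated takedown or human review")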

As a second line of defense, platforms need to be able to stop reposts or modified versions of known harmful videos from spreading.  To do this, platforms use predictions from pre-trained visual similarity models in the same way they use hash comparisons today. With an original version stored as a reference, automated moderation systems can perform a frame-wise comparison with any newly posted videos, flagging or removing new content that scores above a certain similarity threshold.

In these examples, visual similarity models accurately predict that frame(s) in the query video are derived from the original reference, even under heavy augmentation. By screening new uploads against video content known to be graphic, violent, or otherwise harmful, these moderation systems can replace incomplete tools like hashing and audio comparison to more comprehensively solve the harmful content detection problem.
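
The sketch below illustrates that frame-wise screening step. It assumes an embed() function that maps a frame to a feature vector (standing in for a visual similarity model) and a stored set of reference embeddings for the known harmful video; the 0.9 threshold is a placeholder a platform would tune for its own tolerance.

# Sketch: screening a new upload against reference embeddings of a known harmful
# video. embed(), the frame sampling, and the threshold are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likely_repost(upload_frames, reference_embeddings, embed, threshold=0.9) -> bool:
    """Flag the upload if any of its frames closely matches any reference frame."""
    for frame in upload_frames:
        query = embed(frame)  # feature vector from a visual similarity model
        best_match = max(cosine_similarity(query, ref) for ref in reference_embeddings)
        if best_match >= threshold:
            return True
    return False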

Final Thoughts: How Hive Can Help

No amount of technology can undo the harm caused by violent extremism in Buffalo or elsewhere.  We can, however, use new technology to mitigate the immediate and future harms of allowing hate-based violence to be spread in our online communities. 

Hive is proud to support the world’s largest and most diverse platforms in fulfilling their obligation to keep online communities safe, vibrant, and hopeful. We will continue to contribute towards state-of-the-art moderation solutions, and can answer questions or offer guidance to Trust & Safety teams who share our mission at support@thehive.ai.


Find Duplicated and Modified NFT Images with New NFT Search APIs


Why We Built the NFT Search API

Artists, technologists, and collectors have recently shown growing interest in non-fungible tokens (NFTs) as digital collectibles. With this surge in popularity, however, the red-hot NFT space has also become a prime target for plagiarism, copycats, and other types of fraud.

While built-in blockchain consensus mechanisms are highly effective at validating the creation, transaction, and ownership of NFTs, these “smart contracts” are typically not large enough to store the files they represent. Instead, the token simply links to a metadata file with a public link to the image asset. So while the token on the blockchain is itself unique, the underlying image may not be.

Additionally, current blockchain technology has no way of understanding image content or the relationships between images. Hashing checks and other conventional methods cannot address the subjective and more complicated problem of human perceptual similarity between images.

Due to these technical limitations, the same decentralization that empowers creators to sell their work independently also enables bad actors to create copycat tokens with unlicensed or modified image assets. At a minimum, this puts less sophisticated NFT buyers at risk, as they may be unable to tell the difference between original and stolen art; beyond this, widespread duplication also undermines the value proposition of original tokens as unique collectibles.

To help solve this problem, we are excited to offer NFT Search, a new API product built on a searchable index of image assets from major blockchains and powered by Hive’s robust image similarity model.

NFT Search makes an otherwise opaque dataset easily accessible, allowing marketplaces and other stakeholders to search existing NFT image assets for matches to query images, accurately identifying duplicates and modified copies. NFT Search has the potential to provide much-needed confidence across the NFT ecosystem to help accelerate growth and stability in the market.  

This post explains how our model works and the new API that makes this functionality accessible.

How Our Models Assess Similarity Between NFT Images

Hive’s NFT Search model is a deep vision image similarity model optimized for the types of digital art used in NFTs. To build this model, we used contrastive learning and other self-supervised techniques to expose the model to a range of possible image augmentation methods. We then fine-tuned our notion of image similarity to account for a characteristic feature of NFTs: small, algorithmically-generated trait differences between images intended to be unique tokens.

The resulting model is targeted toward exact visual matches, but also resilient to manual manipulations and computer-generated variants that would bypass conventional hashing checks. 

To quantify visual similarity between a query image and existing NFT image assets, the model returns similarity scores normalized between 0 and 1 for each identified match. For a matching NFT image, a similarity score of 1.0 indicates that the query image is an exact duplicate of the matching image. Lower scores indicate that the query image has been modified or is otherwise visually distinct in some way. 
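
As a rough illustration of how a normalized score like this can be produced (a sketch only, not necessarily Hive's exact formulation), the cosine similarity between two embedding vectors can be rescaled from [-1, 1] onto [0, 1]:

# Sketch: turning embedding similarity into a [0, 1] score.
# Illustrates the scale of the scores, not Hive's actual scoring function.
import numpy as np

def similarity_score(query_emb: np.ndarray, match_emb: np.ndarray) -> float:
    cos = np.dot(query_emb, match_emb) / (np.linalg.norm(query_emb) * np.linalg.norm(match_emb))
    return float((cos + 1.0) / 2.0)  # 1.0 = exact duplicate; lower = modified or distinct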

Building a Robust NFT Index for Broad Similarity Searches

Building a robust image comparison model was a necessary first step, but to make an NFT search solution useful we also needed to construct a near-complete set of existing NFT images as a reference set for broad comparisons. To do this, Hive crawls and indexes NFT images referenced on the Ethereum and Polygon blockchains in real time, with support for additional blockchains in development. We also store identifying metadata from the associated tokens – including token IDs and URLs, contract addresses, and descriptors – to create a searchable “fingerprint” of each blockchain that enables comprehensive visual comparisons.

Putting it all together: Example NFT Searches and Model Predictions

At a high level: when receiving a query image, our NFT model compares the query image against each existing NFT image in this dataset. The NFT Search API then returns a list of any identified matches, including links to the matching images and token metadata. 

To get a sense of NFT Search’s capabilities and how our scores align with human perceptual similarity, here’s a look at a few copycat tokens the model identified in recent searches: 

This is an example of an exact duplicate (similarity score 1.00): a copy of one of the popular Bored Ape Yacht Club images minted on the Polygon blockchain. Because NFT Search compares the query image to Hive’s entire NFT dataset, it is able to identify matching images across multiple blockchains and token standards.

Things get more interesting when we look for manually or programmatically manipulated variants at lower similarity scores. Take a look at the results from the search on another Bored Ape token, number 320: 

This search returned many matches, including several exact matches on both the Ethereum and Polygon blockchains. Here’s a look at other, non-exact matches it found:

  • Variant 1: A basic variant where the original Bored Ape 320 image is mirrored horizontally. This simple manipulation has little impact on the model’s similarity prediction. 
  • Variant 2 – “BAPP 320”: An example of a computer-manipulated copy on the Ethereum blockchain. The token metadata describes the augmented duplicate as an “AI-pixelated NFT” that is “inspired by the original BAYC collection.” Despite visual differences, the resulting image is structurally quite similar to the original, and our NFT model predicted accordingly (score = 0.94). 
  • Variant 3 – “VAYC 5228”: A slight variant located on the Ethereum blockchain. The matching image has a combination of Bored Ape art traits that does not exist in the original collection, but since many traits match, the NFT model still returns a relatively high similarity score (0.85). 
  • Variant 4 – These Apes Don’t Exist #274: Another computer-manipulated variant, but this one results in a new combination of Bored Ape traits and visible changes to the background. The token metadata describes these as “AI-generated apes with hyper color blended visual traits imagined by a neural network.” Due to these clear visual and feature differences, this match yielded a lower similarity score (0.71).

NFT Search API: Response Object and Match Descriptions

Platforms integrate our NFT Search API into their workflows to automatically submit queries when tokens are minted, listed for sale, or sold, and receive model prediction results in near-real time.

The NFT Search API will return a full JSON response listing any NFTs that match the query image. For each match, the response object includes:

  • A link (URL or IPFS address) to the matching NFT image
  • A similarity score
  • The token URL
  • Any descriptive token metadata hosted at the token URL (e.g., traits and other descriptors)
  • The unique contract address and token ID pair

To make the details of the API response more concrete, here’s the response object for the “BAPP 320” match shown above: 

"matches": [
    ...    
    {
        "url": "ipfs://QmY6RZ29zJ7Fzis6Mynr4Kyyw6JpvvAPRzoh3TxNxfangt/320.jpg",
        "token_id": "320",
        "contract_address": "0x1846e4EBc170BDe7A189d53606A72d4D004d614D",
        "token_url": "ipfs://Qmc4onW4qT8zRaQzX8eun85seSD8ebTQjWzj4jASR1V9wN/320.json",
        "image_hash": "ce237c121a4bd258fe106f8965f42b1028e951fbffc23bf599eef5d20719da6a",
        "blockchain": "ethereum", //currently, this will be either "ethereum" or "Polygon"
        "metadata":{
             "name": "Pixel Ape #320",
             "description": "**PUBLIC MINTING IS LIVE NOW: https://bapp.club.** *Become a BAPP member for only .09 ETH.* The BAPP is a set of 10,000 Bored Ape NFTs inspired by the original BAYC collection. Each colorful, AI-pixelated NFT is a one-of-a-kind collectible that lives on the Ethereum blockchain. Your Pixel Bored Ape also serves as your Club membership card, granting you access to exclusive benefits for Club members.",
             "image": "ipfs://QmY6RZ29zJ7Fzis6Mynr4Kyyw6JpvvAPRzoh3TxNxfangt/320.jpg",
             "attributes":[
             {
             //list of traits for NFT art if applicable 
             },
"similarity_score": 0.9463750907624477
    },
    ...
]

Aside from identifying metadata, the response object also includes a SHA256 hash of the NFT image currently hosted at the image URL. The hash value (and/or a hash of the query image) can be used to confirm an exact match, or to verify that the NFT image hosted at the URL has not been modified or altered at a later time. 
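
As an illustration of how a marketplace might consume this response, the sketch below filters matches by similarity score and recomputes the SHA-256 hash of the currently hosted image to confirm it still matches the indexed asset. The fetch_image_bytes() helper and the 0.9 threshold are assumptions for the example, not part of the API.

# Sketch: processing NFT Search matches and verifying the hosted image hash.
# fetch_image_bytes() and the score threshold are illustrative assumptions.
import hashlib

def review_matches(matches: list, fetch_image_bytes, threshold: float = 0.9) -> list:
    flagged = []
    for match in matches:
        if match["similarity_score"] < threshold:
            continue  # ignore weak matches

        # Recompute the SHA-256 of the image currently hosted at the match URL and
        # compare it to the hash recorded when the token was indexed.
        hosted_bytes = fetch_image_bytes(match["url"])
        unchanged = hashlib.sha256(hosted_bytes).hexdigest() == match["image_hash"]

        flagged.append({
            "token": (match["contract_address"], match["token_id"]),
            "blockchain": match["blockchain"],
            "score": match["similarity_score"],
            "image_unchanged": unchanged,
        })
    return flagged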

Final Thoughts

Authenticating NFTs is an important step forward in increasing trust between marketplaces, collectors, and creators who are driving the growth in this new digital ecosystem. We also recognize that identifying duplicates and altered copies within blockchains is just one part of a broader problem, and we’re currently hard at work on complementary authentication solutions that will expand our comparison scope from blockchains to the open web.

If you’d like to learn more about NFT Search and other solutions we’re building in this space, please feel free to reach out to sales@thehive.ai or contact us here.


Search Custom Image Libraries with New Image Similarity Models


Building a Smarter Way to Search

Hive has spent the last two years building powerful AI models served to customers via APIs. At their core, our current models – visual and text classification, logo detection, OCR, speech-to-text, and more – generate metadata that describes unstructured content. Hive customers use these “content tagging” models to unlock value across a variety of use-cases, from brand advertising analytics to automated content moderation.

While these content tagging models are powerful, some content understanding challenges require a more holistic approach. Meeting these challenges requires an AI model that not only understands a piece of content, but also sees how that content relates to a larger set of data.  

Here’s an example: a dating app is looking to moderate their user profile images. Hive’s existing content tagging APIs can solve a number of challenges here, including identifying explicit content (visual moderation), verifying age (demographics), and detecting spam (OCR).  But what if we also needed to detect whether or not a given photo matches (or is very similar to) another user’s profile? That problem would fall outside the scope of the current content tagging models. 

To meet these broader content understanding challenges, we’re excited to launch the first of our intelligent search solutions: Custom Search, an image comparison API built on Hive’s visual similarity models. With the Custom Search APIs, platforms can maintain individualized, searchable databases of images and quickly submit query images for model-based comparisons across those sets. 

This customizability opens up a wide variety of use-cases:

  • Detecting spam content: spammers on online platforms often reuse the same content or variants of an original piece of content. By banning a single piece of content and using Custom Search, platforms can catch those reposts and variants and protect their users far more extensively
  • Detecting marketplace scams: identify potentially fraudulent listings based on photos that match or are similar to photos in other listings
  • Detecting impersonation attempts: on social networks and dating apps, detect whether the same or similar profile images are being used across different accounts

This post will preview our visual similarity models and explore how to use Hive’s Custom Search APIs.

Image Similarity Models: A Two-Pronged Approach

More than other classification problems, the question of “image similarity” largely depends on definitions: at what point are two images considered similar or identical? To solve this, we used contrastive learning techniques to build two deep learning models with different but complementary ground-truth concepts of image similarity. 

The first model is optimized to identify exact visual matches between images – in other words: would a human decide that two images are identical upon close inspection? This “exact match” model is sensitive to even subtle augmentations or visual differences, where modifications can have a substantial impact on its similarity predictions.

The second model is optimized towards identifying manipulated images, and is more specifically trained on (manual) modifications such as overlay text, cropping, rotations, filters, and juxtapositions. In other words, is the query image a manipulated copy of the original, or are they actually different images?

Why Use Similarity Models for Image Comparison?

Unlike traditional image duplicate detection approaches, Hive’s deep learning approach to image comparison builds in resilience to image modification techniques, including both manual image manipulations via image editing software and adversarial augmentations (e.g., noise, filters, and other pixel-level alterations). By training on these augmentations specifically, our models can pick up modifications that would defeat conventional image hashing checks, even if those modifications don’t result in visible changes to the image.

Each model quantifies image similarity as a normalized score between 0 and 1. As you might expect, a pair-wise similarity score of 1.0 indicates an exact match between two images, while lower scores correspond to the extent of visual differences or modifications.  

Example Image Comparisons and Model Responses

To illustrate the problem and give a sense of our models’ understanding, here’s how they classify some example image pairs: 

This example is close to an exact match – each image is from the same video frame. Both models predict very high similarity scores (although not an exact visual match). However, the model predictions begin to diverge when we consider manipulated images:

  • Horizontal flip plus filter adjustments
  • Recoloration plus multiple mask overlay
  • Layered overlay text

In these examples, the exact match model shows significantly more sensitivity to visual differences, while the broader visual similarity model (correctly) predicts that one image is a manipulated copy of the other. In this way, scores from these models can be used in distinct but complementary ways to identify matching images in your image library. 

Hive’s Custom Search: API Overview

Custom Search includes three API endpoints: two for adding and removing images from individualized image libraries, and a third to submit query images for model-based comparison. 

For comparison tasks, the query endpoint allows images to be submitted for comparison to the library associated with your project. When a query image is submitted, our models will compare the image to each reference image in your custom index to identify visual matches. 

The Custom Search API will return a similarity score from both the exact visual match model and the broader visual similarity model – like those shown in the examples above – for any matching images. Each platform can then choose which of these scores to use (and at what threshold) based on their desired use-case.
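
Putting the pieces together, an integration for the dating-app scenario described earlier might look something like the sketch below. The client object, its method names, and the thresholds are hypothetical stand-ins for the add/remove/query endpoints described above, not the actual API surface.

# Sketch of a Custom Search moderation flow for new profile photos.
# The client methods and thresholds are hypothetical placeholders.
EXACT_MATCH_THRESHOLD = 0.95   # exact visual match model
SIMILARITY_THRESHOLD = 0.85    # broader visual similarity model

def check_new_profile_photo(client, user_id: str, image_url: str) -> str:
    # Compare the new photo against the platform's existing image library.
    matches = client.query(image_url=image_url)

    for match in matches:
        if match["exact_match_score"] >= EXACT_MATCH_THRESHOLD:
            return "reject: duplicate of an existing profile image"
        if match["similarity_score"] >= SIMILARITY_THRESHOLD:
            return "review: likely a manipulated copy of an existing image"

    # No match found: add the image to the library for future comparisons.
    client.add_image(image_url=image_url, reference_id=user_id)
    return "accept"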

Final Thoughts

We’re excited about the ways that our new Custom Search APIs will enable customers to unlock useful insights in their search applications. For Hive, this represents the start of a new generation of enterprise AI that just scratches the surface of what is possible in this space.

If you’d like to learn more about Custom Search APIs or get help designing a solution tailored to your needs, you can reach out to our sales team here or by email at sales@thehive.ai.