How Hive is helping social platforms and BPOs manage emergent content moderation needs during the COVID-19 pandemic

Social platforms face significant PR and revenue risks during the coronavirus crisis, challenged to maintain safe environments in the face of constrained human content moderation and insufficient in-house AI; Hive is using AI and its distributed workforce of 2 million contributors to help

Hive | March 23, 2020

SAN FRANCISCO, CA (March 23, 2020) – The extraordinary measures taken worldwide to limit the spread of the coronavirus disease have disrupted the global economy, as businesses across industries scramble to adapt to a reality few were prepared for. In many cases, companies have stalled operations entirely – notable examples include airlines, movie theaters, theme parks, and restaurants.

The disruption facing consumer technology companies like Google, Facebook, and Twitter is different. Engagement on social media platforms is unaffected, if not boosted, by the outbreak. Underneath those user trends, however, lie significant public relations and revenue risks if content moderation cannot keep up with the volume of user-generated content uploads.

Hive, a San Francisco-based AI company, has emerged as a leader in helping platforms navigate the disruption through a combination of data labeling services at scale and production-ready automated content moderation models. Hive operates the world's largest distributed workforce of humans labeling data – now more than 2 million contributors from more than 100 countries – and has stepped in to support emergent content moderation data labeling needs as the contract workforces of business process outsourcers (BPOs) have been forced onto hiatus, given their inability to work from home.
Further, Hive's suite of automated content moderation models has consistently and significantly outperformed comparable models from the top public clouds, and is being used by more than 15 leading platforms to reduce the volume of content that requires human review.

Context for the Disruption

It is no secret that major social platforms employ tens of thousands of human content moderators to police uploaded content. These massive investments are made to maintain a brand-safe environment and to protect billions of dollars of ad revenue from marketers who are quick to act when things go wrong. Most of this moderation is done by contract workers, often secured through outsourced labor from firms like Cognizant and Accenture.

Work-from-home mandates spurred by COVID-19 have disrupted this model, as most of these moderators are not permitted to work from home. Platforms have suggested that they will use automated tools to help fill the gap during the disruption, but they have also acknowledged that this is likely to reduce effectiveness and result in slower response times than normal.

How Hive is Helping

Hive is in a unique position to meet emergent needs from social media platforms. As BPOs have been forced to stand down onsite content moderation services, significant demand for data labeling has arisen. Hive has been able to meet these needs on short notice, mobilizing the world's largest distributed workforce of humans labeling data – now more than 2 million contributors sourced from more than 100 countries. Hive's workforce is paid to complete data labeling tasks through a consensus-driven workflow that yields high-quality ground truth data.

"As more people worldwide stay close to home during the crisis and face unemployment or furloughs, our global workforce has seen significant daily growth and unprecedented capacity," says Kevin Guo, Co-Founder and CEO of Hive.
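A consensus-driven workflow of the kind described above can be sketched as majority voting across redundant, independent annotations, promoting an item to ground truth only when agreement clears a threshold. This is a simplified illustration under assumed thresholds – Hive's actual pipeline is not public:

```python
from collections import Counter

def consensus_label(annotations, min_votes=3, agreement=0.8):
    """Promote an item to ground truth only when annotators agree strongly.

    annotations: class labels from independent annotators for one item.
    min_votes and agreement are illustrative thresholds, not Hive's.
    Returns (label, confidence) on consensus, otherwise None.
    """
    if len(annotations) < min_votes:
        return None  # not enough redundancy yet; route to more annotators
    label, count = Counter(annotations).most_common(1)[0]
    confidence = count / len(annotations)
    return (label, confidence) if confidence >= agreement else None

# Four of five annotators call the image NSFW -> accepted at 0.8 confidence
print(consensus_label(["nsfw", "nsfw", "suggestive", "nsfw", "nsfw"]))
```

Items that fail the agreement check would typically be re-queued for additional annotators rather than discarded, which is how redundancy converts noisy individual judgments into reliable training labels.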
Among data labeling service providers, Hive brings differentiated expertise to content moderation use cases. To date, Hive's workforce has produced more than 80 million human annotations for "not safe for work" (NSFW) content and more than 40 million human annotations for violent content (e.g., guns, knives, blood). That preexisting job design and workforce familiarity has meant negligible setup time for the new clients signed this week alone.

Platforms are also relying on Hive to reduce the volume of content requiring human review through Hive's automated content moderation product suite. Hive's models – which span visual, audio, and text solutions – have consistently and significantly outperformed comparable models from the top public clouds, and currently help power content moderation for more than fifteen of the top social platforms.

Guo adds, "We have ample capacity for labeling and model deployment and are prepared to support the industry in helping to keep digital environments safe for consumers and brands as we all navigate the disruption caused by COVID-19."

For press inquiries, contact Kevin Guo, Co-Founder and CEO, at kevin.guo@thehive.ai.
Hive Named to Fast Company's Annual List of the World's Most Innovative Companies for 2020

Hive has been named to Fast Company's prestigious annual list of the World's Most Innovative Companies for 2020

Hive | March 10, 2020

SAN FRANCISCO, CA (March 10, 2020) – Hive has been named to Fast Company's prestigious annual list of the World's Most Innovative Companies for 2020. The list honors the businesses making the most profound impact on both industry and culture, showcasing a variety of ways to thrive in today's fast-changing world. This year's MIC list features 434 businesses from 39 countries.

"It's an honor to be featured in Fast Company's list of the Most Innovative Companies for 2020," said Kevin Guo, Co-Founder and CEO of Hive. "This recognition follows a year of step-change growth in Hive's business and team, and symbolizes our progress in powering practical AI solutions for enterprise customers across industries."

Hive is a full-stack AI company specializing in computer vision and deep learning, serving clients across industries with data labeling, model licensing, and subscription data products. During 2019, Hive grew to more than 100 clients, including 10 companies with market capitalizations exceeding $100 billion.

At the core of Hive's business, the company operates the world's largest distributed workforce of humans labeling data – now boasting nearly 2 million registered contributors globally. Hive's workforce hand-labeled more than 1.3 billion pieces of training data in 2019, inputs to a consensus-driven workflow that powers deep learning models with unparalleled accuracy compared to similar offerings from the largest public cloud providers. The company's core models serve use cases including automated content moderation, logo and object detection, optical character recognition, voice transcription, and context classification. Across its models, Hive processed nearly 20 billion API calls in 2019.
The company also operates Mensio, a media analytics platform developed in partnership with Bain & Company that integrates Hive's proprietary TV content metadata on commercial airings and camera-visible sponsorship placements with third-party viewership and outcome datasets. Mensio is currently in use by leading TV network owners, brands, and agencies for competitive intelligence, media planning, and optimization.

Fast Company's editors and writers sought out the most groundbreaking businesses on the planet across myriad industries. They also judged nominations received through their application process. The World's Most Innovative Companies is Fast Company's signature franchise and one of its most highly anticipated editorial efforts of the year. It provides both a snapshot and a road map for the future of innovation across the most dynamic sectors of the economy.

"At a time of increasing global volatility, this year's list showcases the resilience and optimism of businesses across the world. These companies are applying creativity to solve challenges within their industries and far beyond," said Fast Company senior editor Amy Farley, who oversaw the issue with deputy editor David Lidsky.

Fast Company's Most Innovative Companies issue (March/April 2020) is now available online at fastcompany.com/most-innovative-companies/2020, as well as in app form via iTunes, and on newsstands beginning March 17, 2020. The hashtag is #FCMostInnovative.

About Hive

Hive is an AI company specializing in computer vision and deep learning, focused on powering innovators across industries with practical AI solutions and data labeling, grounded in the world's highest quality visual and audio metadata. For more information, visit thehive.ai.

About Fast Company

Fast Company is the only media brand fully dedicated to the vital intersection of business, innovation, and design, engaging the most influential leaders, companies, and thinkers on the future of business.
Since 2011, Fast Company has received some of the most prestigious editorial and design accolades, including the American Society of Magazine Editors (ASME) National Magazine Award for “Magazine of the Year,” Adweek’s Hot List for “Hottest Business Publication,” and six gold medals and 10 silver medals from the Society of Publication Designers. The editor-in-chief is Stephanie Mehta and the publisher is Amanda Smith. Headquartered in New York City, Fast Company is published by Mansueto Ventures LLC, along with our sister publication Inc., and can be found online at www.fastcompany.com.
Updated Best-in-Class Automated Content Moderation Model

Improved content moderation suite with additional subclasses; now performs better than human moderators

Hive | March 2, 2020

The gold standard for content moderation has always been human moderators. Facebook alone reportedly employs more than 15,000 of them. But this manual approach has critical problems – namely cost, effectiveness, and scalability. Headlines over recent months and years are scattered with high-profile quality issues – and, increasingly, press coverage of significant mental health issues affecting full-time content moderators (see this article from The Verge).

Here at Hive, we believe AI can transform industries and business processes. Content moderation is a perfect example: platforms have an obligation to do this better, and we believe Hive's role is to power the ecosystem in addressing the challenge.

We are excited to announce the general release of our enhanced content moderation product suite, featuring significantly improved NSFW and violence detection. Our NSFW model now achieves 97% accuracy and our violence model achieves 95% accuracy – considerably better than typical outsourced moderators (~80%), and even better than an individual Hive annotator (~93%).

Deep learning models are only as good as the data they are trained on, and Hive operates the world's largest distributed workforce of humans labeling data – now nearly 2 million contributors globally (our data labeling platform is described in further detail in an earlier article). In this new release, we have more than tripled the training data, built from a diverse set of user-generated content sourced from the largest content platforms in the world. Our NSFW model is now trained on more than 80 million human annotations, and our violence model on more than 40 million.
Model Design

We were selective in constructing the training dataset, strategically adding the most impactful training examples. For instance, we used active learning to select training images where the existing model's results were most uncertain. Deep learning models produce a confidence score for an input image ranging from 0.0 (very confident the image is not in the class) to 1.0 (very confident the image is in the class). By focusing our labeling efforts on images in the middle range (0.4 – 0.6), we were able to improve model performance specifically on edge cases.

As part of this release, we also focused on reducing ambiguity in the 'suggestive' class of the NSFW model. We conducted a large manual inspection of images where Hive annotators tended to disagree or – even more crucially – where model results disagreed with consensus Hive annotations. When examining images in certain ground truth sets, we found that up to 25% of disagreements between model predictions and human labels were due to erroneous labels, with the model prediction being accurate. Fixing these ground truth images was critical for improving model accuracy. For instance, in the NSFW model, we discovered that moderators disagreed on niche cases, such as which class leggings, contextually implied intercourse, or sheer clothing fell into. By carefully defining class boundaries and relabeling data accordingly, we taught the model these distinctions, improving accuracy by as much as 20%.

Figure 1.1 – Updated examples of images classified as clean

Figure 1.2 – Updated examples of images classified as suggestive

For our violence model, client feedback showed that the knife and gun classes included instances of these weapons that wouldn't be considered cause for alarm – for example, guns appearing in video games or knives used for cooking.
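The active-learning step described above – routing images whose confidence falls in the uncertain middle band to human labelers – can be sketched as a simple filter. The 0.4 – 0.6 band comes from the text; the function and file names are illustrative:

```python
def select_uncertain(scored_images, low=0.4, high=0.6):
    """Return image ids whose model confidence falls in the uncertain band.

    scored_images: iterable of (image_id, confidence) pairs, confidence in [0, 1].
    Images the model already scores near 0 or 1 add little new signal;
    the labeling budget goes to the ambiguous middle, where fresh human
    labels move the decision boundary the most.
    """
    return [image_id for image_id, score in scored_images if low <= score <= high]

scores = [("a.jpg", 0.02), ("b.jpg", 0.55), ("c.jpg", 0.97), ("d.jpg", 0.41)]
print(select_uncertain(scores))  # ['b.jpg', 'd.jpg']
```

The selected images would then be sent through the consensus labeling workflow, and the resulting labels added to the next training run.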
Companies like Facebook have publicly acknowledged the challenge of differentiating between animated and real guns (see this article on TechCrunch). In this release, the model introduces two brand-new classes so that it can distinguish culinary knives from violent knives, and animated guns from real guns, providing real, actionable alerts on weapons.

Figure 2 – Examples of animated guns, which the model now distinguishes from real guns

Figure 3 – Examples of culinary knives, which are no longer flagged as violent

Model Performance

The improvement of our new models over our old ones is significant. Our NSFW model was the first and most mature model we built, but after increasing training annotations from 58M to 80M, it still improved dramatically: at 95% recall, the new model's error rate is 2%, down from 4.2% – a decrease of more than 50%.

Our new violence model was trained on over 40M annotations – a more than 100% increase over the previous training set of 16M annotations. Performance improved significantly across all classes. At 90% recall, error rates decreased from 27% to 10% (a 63% decrease) for guns, from 23% to 10% (a 57% decrease) for knives, and from 34% to 20% (a 41% decrease) for blood.

Over the past year, we've conducted numerous head-to-head comparisons against other market solutions, using both our held-out test sets and evaluations on data from some of our largest clients. In all of these studies, Hive's models came out well ahead of every other model tested. Figures 6 and 7 show data from a recent study conducted with one of our most prominent clients, Reddit. For this study, Hive processed 15,000 randomly selected images through our new model as well as through the top three public clouds: Amazon Rekognition, Microsoft Azure, and Google Cloud's Vision API.
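The relative error rate multipliers quoted in these comparisons follow directly from precision at a fixed recall operating point: the error rate among flagged content is 1 − precision, and the headline multiplier is the ratio of the two error rates. A minimal sketch of the arithmetic (the sample numbers are the NSFW and gun figures reported for the Reddit study):

```python
def relative_error_rate(precision_ours, precision_theirs):
    """Ratio of error rates (1 - precision) at the same recall operating point."""
    return (1 - precision_theirs) / (1 - precision_ours)

# NSFW at 90% recall: Hive precision 0.99 vs. the weakest public cloud at 0.68
print(round(relative_error_rate(0.99, 0.68), 1))  # 32.0
# Guns at 90% recall: Hive precision 0.90 vs. roughly 0.08
print(round(relative_error_rate(0.90, 0.08), 1))  # 9.2
```

Because the denominator is 1 − precision, small precision gains near 100% translate into very large multipliers, which is why the NSFW comparison yields the biggest headline numbers.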
For NSFW content, at 90% recall, Hive's precision is 99%, while the public clouds range between 68% and 78% – a relative error rate between 22x and 32x lower.

The outperformance of our violence model is similarly significant. For guns, at 90% recall, Hive's precision is 90%; public clouds achieve about 8% – a relative error rate about 9.2x lower. For knives, at 90% recall, Hive's precision is 89%; public clouds achieve about 13% – a relative error rate about 7.9x lower. For blood, at 90% recall, Hive's precision is 80%; public clouds range between 4% and 8% – a relative error rate between 4.6x and 4.8x lower.

Final Thoughts

This latest model release raises the bar on what is possible from automated content moderation solutions. Solutions like this will considerably reduce the costs of protecting digital environments and limit the need for harmful human moderation jobs across the world. Over the next few months, stay tuned for similar model releases in other relevant moderation classes such as drugs, hate speech and symbols, and propaganda.

For press inquiries, please contact Kevin Guo, Co-Founder and CEO (kevin.guo@thehive.ai).
Hive + Bain Media Lab: Who Was Seen and What Was Said During the 2020 Academy Awards

Next-day analysis highlights trends in measured exposure during Hollywood's biggest night

Hive | February 10, 2020

Hollywood's biggest night, the Academy Awards, wrapped up this year's awards season in style. Red carpet fashion started the night, and Parasite stole headlines after the Korean-language film claimed four awards, including Best Picture. While awards are permanent markers of achievement, exposure is a broader prize shared by winners, nominees, performers, and presenters. Hive's Celebrity Model – used by agencies to measure endorsement value and by media companies to enrich metadata in their video libraries – measured the screen time earned by the stars during the 2020 Oscars.

Bong Joon Ho – who took the stage as a winner four times for Parasite – earned the most time on screen during last night's telecast of the Academy Awards, according to data from Hive's Celebrity Model (see Figure 1). The remainder of the top 10 was made up of winners (Joaquin Phoenix, Brad Pitt, Laura Dern), presenters (Steve Martin, Chris Rock, Kristen Wiig, Maya Rudolph), and some celebrities wearing multiple hats (Elton John as a winner and performer; Taika Waititi as a winner and presenter).

Source: Hive Celebrity Model

Much was said leading up to the event about another host-less award show short on diversity. A diverse mix of presenters and performers aimed to compensate for nominees that skewed white and male. However, while these themes stole headlines leading up to the event and were scattered across acceptance speeches during the night, most of what was said during the show was relatively consistent year-over-year (see Figure 2).
Source: Hive Speech-to-Text Model

Hive's Speech-to-Text Model – with commercial uses including transcription of audio and monitoring of brand mentions in TV, radio, and digital video – was used to track mentions and keywords within the Oscars telecast. Insights from what was said across award presentations and acceptance speeches included:

- Thanks were given more than 120 times and love was expressed more than 55 times – mostly to thematic groups including The Academy, parents, partners, children, and God, as well as casts and crews
- Statements on diversity and inclusion – spanning gender, race, and sexual orientation – were sprinkled throughout the night and were material in aggregate
- Women, plural, were referenced as a group more than 3 times as often as men, most notably differentiated by messages of strength and unity ("all women are superheroes")
- The presence of Black Panther and BlacKkKlansman in the 2019 Oscars drove more significant conversation on race during last year's telecast; such conversation was less frequent this year, though still present across multiple speeches (e.g., Matthew A. Cherry and Karen Rupert Toliver), award presentations (e.g., Chris Rock and Steve Martin), and performances (e.g., Janelle Monáe)
- References to current events were scattered across the awards show, reflecting topics that impacted society over the past year, including climate change and the environment, politics, and the death of Kobe Bryant
- For the second year in a row, Netflix earned the highest count of mentions by award recipients among media companies – even with just 2 of its 24 nominations resulting in wins

About our models:

Hive's Celebrity Model is trained to identify more than 80,000 public figures in media content and uniquely leverages Hive's distributed workforce of more than 1.5 million registered contributors to efficiently optimize the precision and recall of low-confidence results.
Commercial uses of the model include measurement of endorsement value by agencies and enrichment of metadata in media companies' video libraries.

Hive's Speech-to-Text Model parses and transcribes speech data from video and audio content, and can be accessed via an API or on device. The model is trained on tens of thousands of hours of paired audio and transcription data. Commercial uses of the model include transcription of audio and monitoring of brand mentions in TV, radio, and digital video.

Kevin Guo is the co-founder and CEO of Hive and is based in San Francisco. Dan Calpin is President of Hive Media and a Senior Advisor with Bain & Company based in Los Angeles; he was a founding partner of Bain Media Lab. Laura Beaudin is a Bain partner in San Francisco and leads Bain's Global Marketing Excellence practice. Andre James is a Bain partner in Los Angeles and leads Bain's Global Media & Entertainment practice; he was a founding partner of Bain Media Lab.
Bain Media Lab + Hive: How Brands Competed for Attention in Super Bowl LIV

Super Bowl Sunday is more than a sporting event. Here are the highlights from next-day analysis of the commercials and sponsorships within TV advertising's biggest event.

Hive | February 3, 2020

At a Glance:

- Next-day analysis using Mensio, an AI-powered TV advertising and sponsorship analytics platform developed in partnership between Bain & Company and Hive, highlights insights from the commercials and sponsorships within TV advertising's biggest event.
- League and broadcast sponsors again captured significant time on screen, with 9 brands achieving more than 1 minute of total screen time outside of commercials.
- Analysis of engagement with Super Bowl ads, using data from TVision, shows 2.6X higher eyes-on-screen attention during Super Bowl ads than during NFL regular season ads, and 2.0X higher eyes-on-screen attention with the game itself and the sponsorship placements visible within it.
- Commercial minutes were led by advertisers also present in last year's game – 25 companies representing 52% of national airtime in this year's Super Bowl. Increased share of voice came from consumer goods advertisers, whereas financial services and insurance companies opted for a smaller advertising presence during the game.
- Advertisers increased the share of commercials featuring celebrities and greater diversity.

Since winning their respective conference championships two weeks ago, the San Francisco 49ers and Kansas City Chiefs have been heads down planning their schemes for an on-field advantage in yesterday's big game. For many months prior, brands and agencies were drawing up their own plays to break through on game day with memorable and viral creative. What did we learn?
For the second year, Bain Media Lab and Hive have partnered to analyze marketing within and around the Super Bowl using Mensio, an AI-powered TV advertising and sponsorship analytics platform developed in partnership between Bain and Hive. The research relied on analysis of Mensio's creative library, powered by metadata created using Hive's proprietary computer vision models and Hive's consensus-driven data labeling platform, which leverages a distributed workforce of more than 1.5 million registered contributors.

Sponsors capture significant Super Bowl screen time

While Super Bowl ads may lead water cooler conversations this week, official league and broadcast sponsors achieved significant time on screen during yesterday's Super Bowl through camera-visible signage, product placement, and digital billboards in the telecast.

Using Hive's proprietary logo detection model – trained to automatically detect exposure for more than 4,000 brands using more than 200 million individual pieces of human-labeled training data – Bain Media Lab measured the quantity and quality of logo placements within the TV broadcast of the game and halftime show.

While sponsorship placements don't offer the sight-and-sound of traditional ad units, brands and their agencies are applying increasing quantitative rigor to understand the level and value of exposure that these activations deliver across platforms.

Consistent with last year's Super Bowl, the 3 most exposed brands were Nike, Bose, and Pepsi. Nike, the NFL's uniform and on-field apparel supplier, logged more than 45 minutes of cumulative Super Bowl screen time with swooshes visible on uniforms, cleats, and other sideline apparel.
Bose, the league's official headset provider, and Pepsi, which again sponsored the game's halftime show, each totaled more than 3 minutes of cumulative screen time (see Figure 1).

Among sponsors, Gatorade's camera-visible exposure grew the most year-over-year, tallying 3 minutes and 12 seconds of exposure in Super Bowl LIV spread across bottles, cups, coolers, and towels – surging from 1 minute and 20 seconds of time on screen during last year's big game.

In total, eleven brands surpassed 30 seconds of cumulative brand exposure within the Super Bowl LIV telecast (not including the pre-game show and excluding league, team, and network brands).

Among the top brands, Hard Rock, Amazon, and Pepsi achieved the highest average Brand Prominence Score, a proprietary measure of the size, clarity, centrality, and share of voice for a given exposure. Hard Rock, which holds stadium naming rights, earned its prominence through in-stadium signage, whereas exposure for Amazon and Pepsi was highlighted by recurrent digital overlays on the telecast.

Source: Mensio by Bain Media Lab and Hive. Brand Prominence Score is a proprietary metric that reflects the size, clarity, and location on the screen, as well as the presence of other brands or objects, measured every second.

…But Are People Really Watching? (Yes, They Really Are)

The reported price of a 30-second Super Bowl spot in this year's game rose to as much as $5.6 million, powered by continued demand from advertisers. While Super Bowl advertisements are objectively differentiated in their ability to reach a uniquely large live audience, many marketers have also long contended that Super Bowl ads reach a more engaged audience. In collaboration with TVision, a company focused on measuring how viewers engage with television content, we confirmed this hypothesis by applying computer vision technology to viewing behaviors during this NFL season and yesterday's finale.
Compared to 2019 regular season NFL games, yesterday's Super Bowl delivered a dramatically more engaged audience. The game itself – and the sponsorship exposure within it – delivered 2.0X more eyes-on-screen attention as a percentage of total viewership. Even more significant, commercials achieved 2.6X more eyes-on-screen attention than commercials during NFL regular season games (see Figure 2).

Attention to Duration Index measures the proportion of total program / commercial time that the viewer is in the room with eyes on screen. Source: TVision in collaboration with Bain Media Lab and Hive; TVision Panel, 2019-20 NFL Season, P2+, Live and Same Day, NFL Game Broadcasts Only.

Our analysis highlighted two other interesting trends specific to this year's Super Bowl commercials:

Dedicated Advertisers Lead an Evolving Mix

Super Bowl advertisements have become annual traditions for some companies – 22 advertisers representing 52% of national airtime in this year's Super Bowl were also present during last year's game, where they commanded 72% of national airtime. These included stalwarts like Anheuser-Busch, which led all advertisers in airtime in both Super Bowl LIII and Super Bowl LIV, this year spread across spots for Budweiser, Bud Light, and Michelob Ultra (see Figure 3).

Super Bowl LIV also brought its share of new advertisers, with 48% of airtime coming from 25 advertisers not present during Super Bowl LIII. Some brands were returning to the Super Bowl, such as The Hershey Company, which bought its first Super Bowl ad since 2008 to amplify awareness of the newly rebranded Reese's Take5 bar. For others, this year marked a first Super Bowl commercial, including Facebook, which promoted Facebook Groups. Other newcomers, ahead of the upcoming 2020 election, were the campaigns of President Trump and former New York Mayor Michael Bloomberg.
Source: Mensio by Bain Media Lab and Hive

The net effect was a different mix of advertisers than in the regular season and playoffs. Notably, consumer goods companies claimed 33% of airtime in Super Bowl LIV, compared to just 8% during the entirety of this year's NFL regular season and playoffs. The category's Super Bowl presence was led by multiple spots from Anheuser-Busch, Procter & Gamble, and PepsiCo. Financial services and insurance shrank from 17% of airtime in the rest of the season to only 7% of Super Bowl LIV airtime, a result of several top advertisers placing ads in the pregame show or sitting the game out altogether (see Figure 4).

Source: Mensio by Bain Media Lab and Hive

Advertisers Add Celebrities, Greater Diversity

What brands choose to say on TV's largest stage often reflects trends and inflection points in our culture and society. Sometimes this is explicit – with Super Bowl ads introducing us to the cars we will be driving, the movies we will be watching, and the food and drinks we will be consuming in the years ahead. This year, 40% of Super Bowl ads introduced new products, roughly constant year-over-year.

More nuanced is the study of trends in casting, based on analysis of creative metadata generated through a combination of Hive's computer vision models and Hive's consensus-driven data labeling platform, which leverages a distributed workforce of more than 1.5 million registered contributors. Cast analysis shows advertisers continuing to support the zeitgeist surrounding gender equality and diversity & inclusion. Women were present in 90% of spots this year, up from 74% last year. Similarly, 82% of spots this year included people from more diverse backgrounds, compared to 64% of spots last year (see Figure 5). The most significant casting change this year was a surge in spots featuring actors and actresses, musicians, and other public figures, who appeared in 65% of Super Bowl LIV ads compared to just 36% of spots last year.
Source: Mensio by Bain Media Lab and Hive

A resurgent NFL season is now complete. The bounce back in viewership versus the 2018 season yielded a sigh of relief for the league and its broadcast partners, and further validated the continued role of the NFL in the TV advertising landscape. But the Super Bowl is not the only tentpole TV advertising event this month. Next Sunday, brands will be on stage again, this time targeting the premium audience watching The Oscars on ABC.

Dan Calpin is President of Hive Media and a Senior Advisor with Bain & Company based in Los Angeles; he was a founding partner of Bain Media Lab. Laura Beaudin is a Bain partner in San Francisco and leads Bain's Global Marketing Excellence practice. Andre James is a Bain partner in Los Angeles and leads Bain's Global Media & Entertainment practice; he was a founding partner of Bain Media Lab. Sharona Sankar-King is a partner with Bain & Company based in New York and a senior leader in Bain's Customer Strategy & Marketing practice.

Hive is an AI company specialized in computer vision and deep learning, focused on powering innovators across industries with practical AI solutions and data labeling. For more information, visit thehive.ai.

TVision is a TV performance metrics company focused on measuring how viewers engage with television content. For more information, visit www.tvisioninsights.com.

Note: Published Bain Media Lab research relies solely on third-party data sources and is independent of any data or input from clients of Bain & Company.
Hive's Presentation at Plug and Play's Media & Advertising Innovation Summit

Hive | October 25, 2019

Dan Calpin, President of Hive Media, shares an overview of Hive and our media business at the 2019 Plug and Play Fall Innovation Summit in Sunnyvale, CA.