
Announcing Hive’s Integration with NVIDIA NIM

Hive to Accelerate AI Adoption in Private Clouds and On-Prem Environments Using NVIDIA NIM


Hive is excited to announce the groundbreaking integration of our proprietary AI models with NVIDIA NIM. For the first time, this collaboration allows Hive customers to deploy our industry-leading AI models in private clouds and on-premises data centers. We are also announcing that for the remainder of 2024, internet social platforms can receive up to 90 days of free trial access to our models. To learn more, check out the press release here.

The first Hive models to be made available with NVIDIA NIM are our AI-generated content detection models, which allow customers to identify AI-generated images, video, and audio. However, we plan to make additional models available through NVIDIA NIM in the coming months, including content moderation, logo detection, optical character recognition, speech transcription, custom models through Hive’s AutoML platform, and more.

Secure and Accelerated Deployments with NIM

Short for NVIDIA Inference Microservices, NIM delivers AI models to customers as optimized containers. This enables organizations to run AI models on NVIDIA GPUs in private clouds, on workstations, and in on-premises data centers. NVIDIA NIM is part of the NVIDIA AI Enterprise software platform and connects the power of Hive's proprietary AI models, securely deployed on NVIDIA's accelerated infrastructure, with enterprise customers everywhere.

While Hive’s cloud-based APIs process billions of customer requests every month, among prospective customers’ top requests has been the ability to deploy Hive models in private clouds or on-premises. These are often enterprises whose strict data governance standards challenge the use of our cloud-based APIs. Our integration with NIM solves this challenge.

How Customers Use Our Leading AI Detection Models

Our AI-detection tools—the first Hive models to be made available with NVIDIA NIM—have been widely recognized as best-in-class, including by an independent research study from the University of Chicago. The researchers found that Hive’s model was the “clear winner” against both its automated competitors and highly-trained human experts in classifying images as either AI-generated or human-created.

With generative AI on the rise, Hive’s AI detection models have become crucial in combating the technology’s misuse. Here are select ways that customers use our models to protect themselves from the potential misuse of AI-generated and synthetic content.

Internet social platforms leverage our AI detection models to proactively screen content for the presence of AI-enabled misinformation in real time. Digital platforms can leverage our detections to provide transparency to their users by tagging content as AI-generated, or moderate potential misinformation by implementing sitewide bans.

Insurance companies use our models to automate the process of identifying AI-enabled fraud in evidence submitted with insurance claims. By scanning claims evidence for AI-generated augmentations, insurers can quickly, confidently, and securely weed out fraud, avoiding significant payouts on fraudulent claims.

Banks, brokers, and other financial institutions use our AI-generated content detection models to secure their user identity verification and KYC processes, leveraging Hive's industry-leading AI-generated audio detection model to verify voice recognition workflows and prevent sophisticated financial fraud.

Digital marketplaces use our models to automate the detection and moderation of fraudulent listings. Moreover, marketplaces protect their customers’ experience by verifying that both users and their product reviews are authentic.

Video conferencing and live streaming platforms integrate our AI detection models to authenticate video and audio in real time, preventing both impersonation and the misuse of likenesses.

Though not exhaustive, these examples illustrate how customers use our models today.
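To make the transparency and moderation workflows above concrete, here is a minimal sketch of how a platform might route content based on a detection score. The response shape, field name, and threshold values are illustrative assumptions, not Hive's actual API schema; consult Hive's documentation for the real response format.

```python
# Hypothetical sketch: mapping an AI-generated-content score to a
# platform moderation action. Field names and thresholds are
# illustrative, not Hive's actual API schema.

def moderation_action(response: dict,
                      tag_threshold: float = 0.5,
                      ban_threshold: float = 0.9) -> str:
    """Map a classifier score in [0, 1] to a platform action."""
    score = response["ai_generated_score"]  # hypothetical field name
    if score >= ban_threshold:
        return "remove"             # high confidence: enforce sitewide policy
    if score >= tag_threshold:
        return "tag_ai_generated"   # medium confidence: label for transparency
    return "allow"

action = moderation_action({"ai_generated_score": 0.97})
```

A platform favoring transparency over removal would simply raise `ban_threshold` so most detections resolve to tagging rather than takedowns.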

Managing the Risks of Generative AI

The increasing accessibility of generative AI tools poses a new set of risks to companies and organizations. It can be difficult to moderate the proliferation of AI-generated content in a scalable, automated, and secure way. We are proud to provide a solution that supports our customers in managing these risks, now made more accessible for enterprises to deploy on-premises or in private clouds with NVIDIA NIM.

If you’re interested in accessing Hive’s AI models through NVIDIA NIM, you can learn more on our website here or on NVIDIA’s website here. If you have any questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.


“Clear Winner”: Study Shows Hive’s AI-Generated Image Detection API is Best-in-Class


Navigating an Increasingly Generative World

To the untrained eye, distinguishing human-created art from AI-generated content can be difficult. Hive’s commitment to providing customers with API-accessible solutions for challenging problems led to the creation of our AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated. Our model was evaluated in an independent study conducted by Anna Yoo Jeong Ha and Josephine Passananti from the University of Chicago, which sought to determine who was more effective at classifying images as AI-generated: humans or automated detectors.

Ha and Passananti’s study addresses a growing problem within the generative AI space: as generative AI models become more advanced, the boundary between human-created art and AI-generated images has become increasingly blurred. With such powerful tools accessible to the general public, various legal and ethical concerns have been raised regarding the misuse of this technology.

Such concerns are pertinent to address because the misuse of generative AI models negatively impacts both society at large and the AI models themselves. Bad actors have used AI-generated images for harmful purposes, such as spreading misinformation, committing fraud, or scamming individuals and organizations. As only human-created art is eligible for copyright, businesses may attempt to bypass the law by passing off AI-generated images as human-created. Moreover, multiple studies (on both generative image and text models) have shown evidence that AI models will deteriorate if their training data solely consists of AI-generated content—which is where Hive’s classifier comes in handy.

The study’s results show that Hive’s model outperforms both its automated peers and highly-trained human experts in differentiating between human-created art versus AI-generated images across most scenarios. This post examines the study’s methodologies and findings, in addition to highlighting our model’s consistent performance across various inputs.

Structuring the Study

In the experiment, researchers evaluated the performance of five automated detectors (three of which are commercially available, including Hive’s model) and humans against a dataset containing both human-created and AI-generated images across various art styles. Humans were categorized into three subgroups: non-artists, professional artists, and expert artists. Expert artists are the only subgroup with prior experience in identifying AI-generated images.

The dataset consists of four different image groups: human-created art, AI-generated images, “hybrid images” which combine generative AI and human effort, and perturbed versions of human-created art. A perturbation is defined as a minor change to the model input aimed at detecting vulnerabilities in the model’s structure. Four perturbation methods are used in the study: JPEG compression, Gaussian noise, CLIP-based Adversarial Perturbation (which performs perturbations at the pixel level), and Glaze (a tool used to protect human artists from mimicry by introducing imperceptible perturbations on the artwork).
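As a minimal sketch of what one of these perturbations looks like in practice, the snippet below adds zero-mean Gaussian noise to an 8-bit image array with NumPy. The noise strength (`sigma`) and image values are illustrative; the study's exact perturbation parameters are not specified here.

```python
import numpy as np

def gaussian_noise_perturbation(image: np.ndarray, sigma: float = 8.0,
                                seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to an 8-bit image array.

    A simple example of the kind of minor input change used to probe
    detector robustness; sigma controls the noise strength.
    """
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    # Clip back to the valid 8-bit range before converting.
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 128, dtype=np.uint8)   # flat gray test image
perturbed = gaussian_noise_perturbation(img)
```

The perturbed image is visually near-identical to the original, which is precisely what makes such inputs a useful stress test for a detector.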

After evaluating the model on unperturbed imagery, the researchers proceeded to more advanced scenarios with perturbed imagery.

Evaluation Methods and Findings

The researchers evaluated the automated detectors on four metrics: overall accuracy (the proportion of all images classified correctly), false positive rate (the proportion of human-created art misclassified as AI-generated), false negative rate (the proportion of AI-generated images misclassified as human-created), and AI detection success rate (the proportion of AI-generated images correctly classified as AI-generated).
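The four metrics above can be sketched directly from a confusion matrix, treating "AI-generated" as the positive class. The labels below are illustrative, not data from the study; note that the AI detection success rate is simply the complement of the false negative rate.

```python
# Sketch of the study's four evaluation metrics, with "positive"
# meaning AI-generated. The example labels are illustrative.

def detector_metrics(y_true, y_pred):
    """y_true/y_pred: True = AI-generated, False = human-created."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))        # AI flagged as AI
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))  # human flagged as AI
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))  # AI missed
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / (fp + tn),     # human art misclassified
        "false_negative_rate": fn / (fn + tp),     # AI images missed
        "ai_detection_success_rate": tp / (tp + fn),
    }

m = detector_metrics([True, True, False, False],
                     [True, False, False, False])
```

On this toy input, one of two AI-generated images is missed, so accuracy is 0.75, the false negative rate is 0.5, and the false positive rate is 0.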

Among automated detectors, Hive’s model emerged as the “clear winner” (Ha and Passananti 2024, 6). Not only does it boast a near-perfect 98.03% accuracy rate, but it also has a 0% false positive rate (i.e., it never misclassifies human art) and a low 3.17% false negative rate (i.e., it rarely misclassifies AI-generated images). According to the authors, this could be attributed to Hive’s rich collection of generative AI datasets, with high quantities of diverse training data compared to its competitors.

Additionally, Hive’s model proved to be resistant against most perturbation methods, but faced some challenges classifying AI-generated images processed with Glaze. However, it should be noted that Glaze’s primary purpose is as a protection tool for human artwork. Glazing AI-generated images is a non-traditional use case with minimal training data available as a result. Thus, Hive’s model’s performance with Glazed AI-generated images has little bearing on its overall quality.

Final Thoughts Moving Forward

When it comes to automated detectors and humans alike, Hive’s model is unparalleled. Even compared to human expert artists, Hive’s model classifies images with higher levels of confidence and accuracy.

While the study considers the model’s potential areas for improvement, it is important to note that the study was published in February 2024. In the months following the study’s publication, Hive’s model has vastly improved and continues to expand its capabilities, with 12+ model architectures added since.

If you’d like to learn more about Hive’s AI-Generated Image and Video Detection API, a demo of the service can be accessed here, with additional documentation provided here. However, don’t just trust us, test us: reach out to sales@thehive.ai or contact us here, and our team can share API keys and credentials for your new endpoints.