Hive Joins in Endorsing the NO FAKES Act
Hive | April 9, 2025

Today, Hive joins other leading technology companies and trade organizations in endorsing the NO FAKES Act, a bipartisan piece of legislation aimed at addressing the misuse of generative AI technologies by bad actors. The legislation was introduced by U.S. Senators Marsha Blackburn (R-Tenn.), Chris Coons (D-Del.), Thom Tillis (R-N.C.), and Amy Klobuchar (D-Minn.), along with U.S. Representatives Maria Salazar (R-Fla.) and Madeleine Dean (D-Penn.). Read the full letter here.

The NO FAKES Act

The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2025 is a bipartisan bill that would protect the voice and visual likeness of all individuals from unauthorized recreation by generative artificial intelligence. The Act, aimed at addressing the use of non-consensual digital replicas in audiovisual works or sound recordings, would hold individuals or companies liable for producing such content and hold platforms liable for knowingly hosting it.

As a leading provider of AI solutions to hundreds of the world’s largest and most innovative organizations, Hive understands firsthand the extraordinary benefits that generative AI technology provides. However, we also recognize that bad actors are relentless in their attempts to exploit it. As Kevin Guo, Hive’s CEO and cofounder, explains in the endorsement letter:

“The development of AI-generated media and AI detection technologies must evolve in parallel. We envision a future where AI-generated media is created with permission, clearly identified, and appropriately credited. We stand firmly behind the NO FAKES Act as a fundamental step in establishing oversight while keeping pace with advancements in artificial intelligence to protect public trust and creative industries alike.”

(Source: https://www.blackburn.senate.gov/2025/4/technology/blackburn-coons-salazar-dean-colleagues-introduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas)

To this end, Hive has commercialized AI-powered solutions to help digital platforms proactively detect the potential misuse of AI-generated and synthetic content.

Detecting AI-Generated and Deepfake Content

Hive’s AI-generated and deepfake detection models can help technology companies identify unauthorized digital replications of audiovisual likeness and comply with the provisions outlined in the NO FAKES Act. The endorsement letter references the high-profile example of the song “Heart on My Sleeve,” which featured unauthorized AI-generated replicas of the voices of Drake and The Weeknd and was played hundreds of thousands of times before being identified as fake. Streaming platforms and record labels can leverage Hive’s AI-Generated Music model to proactively detect such unauthorized recreations and swiftly remove them.

While the harmful effects of unauthorized AI-generated content go far beyond celebrities, Hive also offers a Celebrity Recognition API, which detects the visual likeness of a broad index of well-known public figures, from celebrities and influencers to politicians and athletes.
Hive’s Celebrity Recognition API can help platforms proactively identify bad actors misusing celebrity visual likeness to disseminate false information or unauthorized advertisements, such as the recent unauthorized synthetic replica of Tom Hanks promoting a dental plan. Hive’s AI-generated and deepfake detection solutions are already trusted by the United States Department of Defense to combat sophisticated disinformation campaigns and synthetic media threats.

For more information on Hive’s AI-Generated and Deepfake Detection solutions, reach out to sales@thehive.ai or visit: https://thehive.ai/apis/ai-generated-content-classification
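For teams evaluating these detection models programmatically, here is a minimal sketch of what a request against a synchronous classification endpoint might look like in Python. The endpoint path, auth header format, and class names used below are assumptions for illustration; refer to Hive’s API documentation for the exact contract.

```python
# Minimal sketch (not an official Hive snippet): submitting an image URL for
# AI-generated / deepfake classification. The endpoint path, auth header, and
# class names below are illustrative assumptions; check Hive's API docs for
# the exact contract.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"  # assumed synchronous endpoint

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_KEY}"},
    data={"url": "https://example.com/suspect-image.jpg"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# Inspect the returned scores; class names such as "ai_generated" or
# "deepfake" are placeholders for whatever the model actually returns.
print(result)
```
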
Streamline CSAM Reports with Moderation Dashboard’s NCMEC Integration
Hive | February 26, 2025 (updated April 9, 2025)

Hive is excited to announce that we have integrated the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline into Moderation Dashboard, streamlining the process of submitting child sexual abuse material (CSAM) reports. This feature is now available to all Moderation Dashboard customers with valid NCMEC credentials.

Ensuring Child Safety Online

The National Center for Missing & Exploited Children is a non-profit organization dedicated to protecting children from all forms of exploitation and abuse. All electronic communication service providers are required under U.S. federal law to report any known CSAM on their platforms to NCMEC’s CyberTipline, a centralized system for receiving and processing CSAM reports. These reports are later shared with law enforcement and relevant service providers so they can take further action.

Throughout our endeavors and partnerships, Hive’s commitment to online safety has been unwavering. We built this integration to help automate the reporting process, simplify our customers’ workflows, and ensure that their platforms can comply with applicable law.

Integration Workflow

A step-by-step sample integration workflow is outlined below, starting from when a user uploads an image to the platform and ending with the subsequent actions a moderator can take. For a more detailed guide on how the reporting process works, refer to the following documentation.

1. A user uploads an image to the platform.
2. The image is processed by Hive’s proprietary CSAM Detection API, powered by Thorn, a leading nonprofit that builds technology to defend children from sexual abuse. To learn more about our Thorn partnership, read our blog posts: “Matching Against CSAM: Hive’s Innovative Integration with Thorn’s Safer Match” and “Expanding Our CSAM Detection API.”
3. If there is a likelihood of CSAM detected in the image, the image will surface as a link in the CSAM Review Feed. Once the link is clicked, the media will open in a new browser tab for the moderator to review. Moderation Dashboard will never display CSAM content directly within the Review Feed.
4. From the Review Feed, the moderator can take two actions:
   - Perform an enforcement action (e.g. banning the user or deleting the post). A webhook is then sent to the customer’s server containing the moderator’s chosen enforcement action as well as the post and user metadata, all of which are used to take the content down (a rough sketch of such a payload appears at the end of this post).
   - Submit a report to NCMEC. The system automatically creates a report, which the moderator can send by clicking the “Submit” button within the Review Feed. After the report is submitted, the system creates an internal log to track it (e.g. submission date and time, as well as the response from NCMEC).

[Image: “Report to NCMEC” button within the Review Feed]

NCMEC Report Contents

Customers can pre-fill information fields that are constant across reports. These fields will be automatically populated for each report, reducing effort on the customer’s end. To provide our customers with full transparency, the report sent to NCMEC includes: the moderator’s information, the company’s information, the potential CSAM content, and the incident date and time.

[Image: Moderator information fields]

If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
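As a rough illustration of the enforcement webhook described in the workflow above, here is a minimal sketch of what a receiving server might handle. Every field name and the handler logic are assumptions for illustration only; the actual payload schema is defined in Hive’s Moderation Dashboard documentation.

```python
# Hypothetical example of an enforcement-action webhook payload; every field
# name here is an assumption for illustration, not Hive's actual schema.
example_payload = {
    "action": "delete_post",            # enforcement action chosen by the moderator
    "moderator": "moderator@example.com",
    "post": {"id": "post_123", "url": "https://example.com/p/post_123"},
    "user": {"id": "user_456"},
    "timestamp": "2025-02-26T18:04:00Z",
}

def handle_webhook(payload: dict) -> None:
    """Take the content down on the customer's side based on the payload."""
    if payload.get("action") in ("delete_post", "ban_user"):
        post_id = payload.get("post", {}).get("id")
        user_id = payload.get("user", {}).get("id")
        # Call your own platform's internal APIs here, e.g.:
        # remove_post(post_id); suspend_user(user_id)
        print(f"Enforcing {payload['action']} on post={post_id}, user={user_id}")

handle_webhook(example_payload)
```
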
Super Bowl LIX – As Seen By AI
Hive | February 10, 2025 (updated February 26, 2025)

Next-day insights on the latest trends in marketing and culture, powered by Hive’s AI models. For more detailed analytics, download the full report.

Key Insights

Were They Here Last Year?
Brands not active during last year’s Super Bowl made up 51% of the airtime for nationally televised ads during this year’s Big Game.

Meet My Famous Friends
Celebrity integration has become a cornerstone of many brands’ creative decisions for Super Bowl commercials. This year was no different, with 60% of ads featuring at least one celebrity, up from 50% last year. Actors and actresses continue to be the most common type of celebrity cast in Super Bowl commercials.

EVs Unplugged
This year’s Super Bowl featured the lowest count (1) and lowest percentage (50%) of auto ads referencing electric vehicles since at least 2020.

A Part Of The Game
27 brands earned more than 5 seconds of screen time within the game and postgame telecast (excluding pregame and commercials), totaling almost two hours of cumulative screen time worth $247.8M in equivalent media value.
Hive to be Lead Sponsor of Trust & Safety Summit 2025
Hive | February 5, 2025 (updated March 17, 2025)

We are thrilled to announce that Hive is the lead sponsor of the Trust & Safety Summit 2025. As Europe’s premier Trust & Safety conference, the summit is designed to empower T&S leaders to tackle operational and regulatory challenges, providing them with both actionable insights and future-focused strategies.

The summit will be held Tuesday, March 25th and Wednesday, March 26th at the Hilton London Syon Park, UK. The two-day event will explore themes such as regulatory preparedness, scaling trust and safety solutions, and best practices for effective content moderation. An incredible selection of programming will include expert-led panels, interactive workshops, and networking events.

Hive’s CEO Kevin Guo will deliver the keynote presentation on “The Next Frontier of Content Moderation,” covering topics such as multimodal LLMs and detecting AI-generated content. Additionally, Hive will host two panels during the event:

- Hyperscaling Trust & Safety: Navigating Growth While Maintaining Integrity. Hive will discuss best practices for scaling trust and safety systems for online platforms experiencing hypergrowth.
- Harnessing AI to Detect Unknown CSAM: Innovations, Challenges, and the Path Forward. Hive will be joined by partners Thorn and IWF to discuss recent advancements in CSAM detection solutions.

As the lead sponsor of the T&S Summit 2025, we are furthering our commitment to making the internet a safer place. Today, Hive’s comprehensive moderation stack empowers Trust & Safety teams of all sizes to scale their moderation workflows with both pre-trained and customizable AI models, flexible LLM-based moderation, and a moderation dashboard for streamlined policy enforcement.

We look forward to welcoming you to the Trust & Safety Summit 2025. If you’re interested in attending the conference, please reach out to your Hive account manager or sales@thehive.ai. Prospective conference attendees can also find more details and ticket information here. For a detailed breakdown of summit programming, download the agenda here.

To learn more about what we do at Hive, please reach out to our sales team or contact us here for further questions.
Protecting Children’s Online Safety with Internet Watch Foundation
Hive | January 23, 2025 (updated February 25, 2025)

Hive is proud to announce that we are partnering with the Internet Watch Foundation (IWF), a non-profit organization working to stop child sexual abuse online. We will be integrating their proprietary keyword and URL lists into our default Text Moderation model for all customers at no additional cost.

Our Joint Commitment to Child Safety

Making the internet a safer place is one of Hive’s core values. Our partnership with IWF allows us to use their specialized knowledge to bolster our leading content moderation tools, helping our customers better detect and flag online records of child sexual abuse. As part of our partnership, Hive will now include the following two IWF wordlists in our default Text Moderation model for all customers at no additional cost:

- Keyword List: Known terms and code words that offenders use to exchange child sexual abuse material (CSAM) in a discreet manner. More information can be found here.
- URL List: A comprehensive list of webpages confirmed to host CSAM in image or video form. More information can be found here.

With these lists, customers can now use Text Moderation to catch keywords and URLs associated with CSAM. The lists are dynamic and will be updated on a daily basis. A sample Text Moderation response is sketched at the end of this post. We recommend that all customers perform an initial evaluation to determine whether the lists’ keywords are helpful for their specific use case. For more information, refer to the following documentation.

Integration with Thorn Safer Match

Our partnership also grants us access to IWF’s hash lists. Previously, we partnered with Thorn, allowing customers to integrate their Safer Match hash-matching technology for CSAM detection using Hive APIs. We can now match against IWF’s hash lists with Thorn Safer Match. If you would like this feature supported, please reach out to our sales team (sales@thehive.ai).

By combining our leading moderation tools with IWF’s specialized expertise, we hope to create a safer internet for children worldwide. For more details, you can find our recent press release here, as well as our CEO Kevin Guo’s interview with Rashi Shrivastava of Forbes here.

If you’re interested in learning more about what we do, please reach out to our sales team or contact us here for further questions.
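As referenced above, here is a rough sketch of what a Text Moderation response flagging one of the IWF lists might look like, expressed as a Python dictionary. The field names and class names are assumptions for illustration; the authoritative response schema is in Hive’s Text Moderation documentation.

```python
# Hypothetical shape of a Text Moderation response in which the input text
# matched an IWF list entry. Field names and class names are illustrative
# assumptions, not Hive's exact schema.
sample_response = {
    "status": "completed",
    "response": {
        "input_text": "example message containing a flagged term or URL",
        "classes": [
            {"class": "csam_keyword_match", "score": 1},  # assumed class name for the IWF keyword list
            {"class": "csam_url_match", "score": 0},      # assumed class name for the IWF URL list
        ],
    },
}

# A platform might route any positive match straight to a CSAM review queue.
matches = [c["class"] for c in sample_response["response"]["classes"] if c["score"] > 0]
if matches:
    print("Escalate to CSAM review queue:", matches)
```
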
Expanding our Moderation APIs with Hive’s New Vision Language Model
Hive | December 23, 2024 (updated February 21, 2025)

Hive is thrilled to announce that we’re releasing the Moderation 11B Vision Language Model. Fine-tuned on top of Llama 3.2 11B Vision Instruct, Moderation 11B is a new vision language model (VLM) that expands our established suite of text and visual moderation models. Building on our existing capabilities, this new model offers a powerful way to handle flexible and context-dependent moderation scenarios.

An Introduction to VLMs and Moderation 11B

Vision language models (VLMs) are models that can learn from both image and text inputs. This ability to simultaneously process inputs across multiple modalities (e.g. images and text) is known as multimodality. While VLMs share similar functions with large language models (LLMs), traditional LLMs cannot process image inputs. With Moderation 11B VLM, we leverage these multimodal capabilities to extend our existing moderation tool suite.

Beyond its multimodality, Moderation 11B VLM can incorporate additional contextual information, which is not possible with our traditional classifiers. The model’s baked-in knowledge, combined with insights learned from our classifier dataset, enables a more comprehensive approach to moderation. Moderation 11B VLM is trained on all 53 public heads of our Visual Moderation system, recognizing content across distinct categories such as sexual content, violence, drugs, hate, and more. Because of these enhancements, it is a valuable addition to our existing Enterprise moderation classifiers, helping to capture the wide range of flexible and alternative cases that can arise in dynamic workflows.

Potential Use Cases

Moderation 11B VLM applies to a broad range of use cases, notably surpassing Llama 3.2 11B Vision Instruct in identifying contextual violations and handling unseen data in our internal tests. Below are some potential use cases where our model performs well:

- Contextual violations: Cases where individual inputs alone may not be flagged as violations, but the inputs contextualized together constitute one. For example, a text message could appear harmless on its own, yet the preceding conversation context reveals it to be a violation.
- Multi-modal violations: Situations where both text and image inputs are important. For instance, analyzing a product image alongside its description can uncover violations that single-modality models would miss.
- Unseen data: Inputs that the model has not previously encountered. For example, customers may use Moderation 11B VLM to ensure that user content aligns with newly introduced company policies.

Below are graphical representations of how our fine-tuned Moderation 11B model performed in our internal testing compared to the Llama 3.2 11B Vision Instruct model. We assessed their respective F1 scores, a metric that combines both precision and recall. The F1 score was computed using the standard formula: F1 = 2 * (precision * recall) / (precision + recall).

In our internal evaluation, we tasked both our Moderation 11B VLM and Llama 3.2 11B Vision Instruct with learning the classification guidelines outlined in our public Visual Moderation documentation. These guidelines were then used to evaluate a randomly selected, sizable sample of images from our proprietary Visual Moderation dataset, which has highly accurate hand-labeled ground truth classifications.
This dataset also included diverse and challenging content types from each of our visual moderation heads, such as sexual intent, hate symbols, and self-harm. While Moderation 11B VLM’s performance demonstrates its ability to generalize well within the scope of these content classes, it is important to note that results may vary depending on the composition of external datasets.

Expanding Moderation

With Moderation 11B VLM’s release, we hope to meaningfully and flexibly broaden the range of use cases our moderation tools can handle. We’re excited to see how this model assists with your moderation workflows, especially when navigating complex scenarios. Anyone with a Hive account can access our API playground here to try Moderation 11B VLM directly from the user interface. For examples of Moderation 11B VLM requests and responses, please refer to the documentation here; a rough illustrative sketch also follows at the end of this post.

If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
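As a rough illustration of the request pattern referenced above, here is a minimal sketch of sending a combined image-and-text prompt to a Moderation 11B VLM endpoint from Python. The endpoint, payload structure, and response fields are assumptions for illustration only; Hive’s documentation defines the actual interface.

```python
# Hypothetical sketch of a Moderation 11B VLM request that pairs an image with
# surrounding text context. Endpoint, payload shape, and response fields are
# illustrative assumptions, not Hive's exact API contract.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.thehive.ai/api/v3/hive/moderation-11b-vision"  # assumed

payload = {
    "input": [
        {
            "text": "Check out this product, message me for a sample",  # listing description
            "image_url": "https://example.com/product-listing.jpg",
            "context": "Previous messages in the thread discussed restricted goods.",
        }
    ]
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. per-class scores or a policy verdict, per the docs
```
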
Announcing Hive’s Partnership with the Defense Innovation Unit
Hive | December 5, 2024 (updated February 21, 2025)

Hive is excited to announce that we have been awarded a Department of Defense (DoD) contract for deepfake detection of video, image, and audio content. This groundbreaking partnership marks a significant milestone in protecting our national security from the risks of synthetic media and AI-generated disinformation.

Combating Synthetic Media and Disinformation

Rapid strides in technology have made AI manipulation the weapon of choice for numerous adversarial entities. For the Department of Defense, a digital safeguard is necessary to protect the integrity of vital information systems and stay vigilant against the future spread of misinformation, threats, and conflicts at a national scale. Hive’s reputation as a frontline defender against AI-generated deception makes us uniquely equipped to handle such threats. Not only do we understand the stakes at hand, we have been and continue to be committed to delivering unmatched detection tools that can mitigate these risks with accuracy and speed.

Under our initial two-year contract, Hive will partner with the Defense Innovation Unit (DIU) to support the intelligence community with our state-of-the-art deepfake detection models, deployed in an offline, on-premise environment and capable of detecting AI-generated video, image, and audio content. We are honored to join forces with the Department of Defense in this critical mission.

Our Cutting-Edge Tools

To best empower the U.S. defense forces against potential threats, we have provided five proprietary models that detect whether an input is AI-generated or a deepfake. If an input is flagged as AI-generated, it was likely created using a generative AI engine. A deepfake, by contrast, is a real image or video in which one or more of the original faces has been swapped with another person’s face. The models we’ve provided are as follows:

- AI-Generated Detection (Image and Video): detects if an image or video is AI-generated.
- AI-Generated Detection (Audio): detects if an audio clip is AI-generated.
- Deepfake Detection (Image): detects if an image contains one or more faces that are deepfaked.
- Deepfake Detection (Video): detects if a video contains one or more faces that are deepfaked.
- Liveness (Image and Video): detects whether a face in an image or video is primary (exists in the primary image) or secondary (exists in an image, screen, or painting inside of the primary image).

Forging a Path Forward

Even as new threats continue to emerge and escalate, Hive remains steadfast in our commitment to provide the world’s most capable AI models for validating the safety and authenticity of digital content. For more details, you can find our recent press release here and the DIU’s press release here.

If you’re interested in learning more about what we do, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
Model Explainability With Text Moderation
Hive | December 2, 2024 (updated February 21, 2025)

Hive is excited to announce that we are releasing a new API: Text Moderation Explanations! This API helps customers understand why our Text Moderation model assigns particular scores to text strings.

The Need For Explainability

Hive’s Text Moderation API scans a text string or message, interprets it, and returns a score from 0 to 3 mapping to a severity level across a number of top-level classes and dozens of languages. Today, hundreds of customers send billions of text strings each month through this API to protect their online communities. A top feature request has been explanations for why our model assigns the scores it does, especially for foreign languages. While some moderation scores may be clear, there can also be ambiguity around edge cases, where it is less obvious why a string was scored the way it was. This is where our new Text Moderation Explanations API comes in, delivering additional context and visibility into moderation results in a scalable way. With Text Moderation Explanations, human moderators can quickly interpret results and use the additional information to take appropriate action.

A Supplement to Our Text Moderation Model

Our Text Moderation classes are ordered by severity, ranging from level 3 (most severe) to level 0 (benign). These classes correspond to the possible scores Text Moderation can give a text string. For example, if a text string falls under the “sexual” head and contains sexually explicit language, it would be given a score of 3.

The Text Moderation Explanations API takes three inputs: a text string, its class label (either “sexual”, “bullying”, “hate”, or “violence”), and the score it was assigned (either 3, 2, 1, or 0). The output is a text string that explains why the original input text was given that score relative to its class. Note that Explanations is only supported for select multilevel heads (corresponding to the class labels listed above).

To develop the Explanations model, we used a supervised fine-tuning process. We used labeled data, which we labeled internally at Hive using native speakers, to fine-tune the original model for this specialized task. This approach allows us to support a number of languages apart from English.

Comprehensive Language Support

We have built our Text Moderation Explanations API with broad initial language support. Language support addresses the crucial issue of understanding why a text string in one’s non-native language was scored a certain way. We currently support eight different languages and four top-level classes for Text Moderation Explanations.

Text Moderation Explanations are now included at no additional cost as part of our Moderation Dashboard product. Additionally, customers can access the Text Moderation Explanations model through an API (refer to the documentation). In future releases, we anticipate adding further language and top-level class support.

If you’re interested in learning more or gaining test access to the Text Moderation Explanations model, please reach out to our sales team (sales@thehive.ai) or contact us here for further questions.
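For illustration, here is a minimal sketch of how a request to a Text Moderation Explanations endpoint might look from Python, passing the three inputs described above. The endpoint path and field names are assumptions; refer to Hive’s documentation for the actual request format.

```python
# Hypothetical sketch of a Text Moderation Explanations request. The endpoint,
# field names, and response shape are illustrative assumptions only; the three
# inputs (text, class label, score) follow the description in the post above.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"  # assumed endpoint for the Explanations project

payload = {
    "text_data": "example message that was scored by Text Moderation",
    "class_label": "violence",  # one of "sexual", "bullying", "hate", "violence"
    "score": 2,                 # the severity score assigned by Text Moderation (0-3)
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # a short text string explaining why the score was assigned
```
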
Expanding Our CSAM Detection API
Hive | November 21, 2024 (updated February 21, 2025)

We are excited to announce that Hive is now offering Thorn’s predictive technology through our CSAM detection API! The API now enables customers to identify novel cases of child sexual abuse material (CSAM) in addition to detecting known CSAM using hash-based matching.

Our Commitment to Child Internet Safety

At Hive, making the internet safer is core to our mission. While our content moderation tools help reduce human exposure to harmful content across many categories, addressing CSAM requires specialized expertise and technology. That’s why we’re expanding our existing partnership with Thorn, an innovative nonprofit that builds technology to defend children from sexual abuse and exploitation in the digital age.

Until now, our integration with Thorn focused on hash-matching technology to detect known CSAM. The new CSAM detection API builds on this foundation by adding advanced machine learning capabilities that can identify previously unidentified CSAM. By combining Thorn’s industry-leading CSAM detection technology with Hive’s comprehensive content moderation suite, we provide platforms with robust protection against both known and newly created CSAM.

How the Classifier Works

The classifier works by first generating embeddings of the uploaded media. An embedding is a list of computer-generated scores between 0 and 1. After generating the embeddings, Hive permanently deletes all of the original media. We then use the classifier to determine whether the content is CSAM based on the embeddings. This process ensures that we do not retain any CSAM on our servers.

The classifier returns a score between 0 and 1 that predicts whether a video or image is CSAM. The response object has the same general structure for both image and video inputs. Please note that Hive returns both results together: probability scores from the classifier and any match results from hash matching against the aggregated hash database. For a detailed guide on how to use Hive’s CSAM detection API, refer to the documentation.

Building a Safer Internet

Protecting platforms from CSAM demands scalable solutions. The problem is complex, but our integration with Thorn’s advanced technology provides an efficient way to detect and stop CSAM, helping to safeguard children and build a safer internet for all.

If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
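To illustrate how a platform might consume the combined output described above (a classifier probability plus hash-match results), here is a minimal sketch in Python. The response structure, field names, and threshold are assumptions for illustration; the documentation defines the actual schema and appropriate production thresholds.

```python
# Hypothetical shape of a CSAM detection response combining the classifier's
# probability score with hash-match results. Field names and the threshold
# below are illustrative assumptions, not Hive's actual schema or guidance.
sample_response = {
    "classifier": {"csam_probability": 0.97},
    "hash_matches": [],  # would contain entries if the media matched known CSAM hashes
}

CLASSIFIER_THRESHOLD = 0.9  # placeholder; choose thresholds with guidance from the docs

def triage(result: dict) -> str:
    """Decide how to route a piece of media based on the combined results."""
    if result["hash_matches"]:
        return "known_csam"   # matched a known hash: remove and report
    if result["classifier"]["csam_probability"] >= CLASSIFIER_THRESHOLD:
        return "suspected_novel_csam"  # send to the CSAM review workflow
    return "no_action"

print(triage(sample_response))
```
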
Announcing General Availability of Hive Models
Hive | October 4, 2024 (updated February 21, 2025)

We are excited to announce that we are making select proprietary Hive models and popular open-source generative models directly accessible for customers to deploy and integrate into their workflows. Starting today, customers can create projects by themselves with just a few clicks.

Hive Proprietary Models

We have made select proprietary Hive models accessible to customers across our Understand and Search model categories, ranging from our Celebrity Recognition API to our Speech-to-Text model. For a full list of generally available models, see our pricing page here.

Additional Model Offerings

We currently offer a variety of open-source image generation models and large language models (LLMs) that customers can directly access themselves. For image generation, we have four options available today, with additional models being added in the coming weeks: SDXL (Stable Diffusion XL), SDXL Enhanced, Flux Schnell, and Flux Schnell Enhanced. SDXL Enhanced and Flux Schnell Enhanced are Hive’s enhanced versions of the respective base models, served exclusively to our customers. The differences are outlined below.

- SDXL (Stable Diffusion XL): A latent diffusion text-to-image generation model produced by Stability AI. Trained on a larger dataset than the base Stable Diffusion model, with a larger UNet enabling better generation.
- SDXL Enhanced: Hive’s enhanced version of SDXL, served exclusively to our customers. Tailored toward a photorealistic and refined art style with extreme detail.
- Flux Schnell: Flux’s fastest model in their suite of text-to-image models, capable of generating images in 4 or fewer steps. Best suited for local development and personal use.
- Flux Schnell Enhanced: Hive’s enhanced version of Flux Schnell, trained on our proprietary data while retaining the base model’s speed and efficiency, served exclusively to our customers. Generates images across a wide range of artistic styles with a specialization in photorealism, leading to high levels of customer satisfaction in past user studies.

For LLMs, we have a selection of Meta’s Llama models from the Llama 3.1 and 3.2 series available now. The differences are outlined below.

- Llama 3.1 8B Instruct: A multilingual, instruction-tuned, text-only model. Compared to other available open-source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (70B).
- Llama 3.1 70B Instruct: A multilingual, instruction-tuned, text-only model. Compared to other available open-source and closed chat models, Llama 3.1 instruction-tuned text-only models achieve higher scores across common industry benchmarks. We provide this model in one additional size (8B).
- Llama 3.2 1B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits onto both edge and mobile devices. Use cases where the model excels include summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (3B).
- Llama 3.2 3B Instruct: A lightweight, multilingual, instruction-tuned, text-only model that fits onto both edge and mobile devices. Use cases where the model excels include summarizing or rewriting inputs, as well as instruction following. We provide this model in one additional size (1B).
We plan to make more models available for direct use in the coming months.

How to Create a Project

Creating new projects has never been easier. To get started, go to thehive.ai and click on the “Go to Dashboard” button in the top-right corner.

[Image: Home Page]

If you are not logged in, the “Go to Dashboard” button will redirect you to the sign-in page. Either sign in to an existing account or click the blue “Sign up” hyperlink at the bottom of the page to create a new account.

[Image: Sign In Page]

You will receive an email to verify your account after signing up. After you’ve either logged into an existing account or verified your new account, you will be redirected to the main dashboard. For new accounts, a new organization named “(User Name)’s personal organization” will be created automatically. Your current organization is visible in the top-right corner.

Before you can submit tasks, you will need to accept the Terms of Use and add credits to your account. To accept the Terms of Use, click the “View Terms and Conditions” button at the bottom of the page. You will need to do this for every additional organization you create.

[Image: Main Dashboard]

To add funds to your credit balance, locate the “Billing” section in the bottom-left corner of the dashboard and click the blue “Add Credit” button, which will redirect you to another page where you can add a payment method.

[Image: Billing]
[Image: Add Payment Method]

Now you’re ready to create your own projects. On any page, click on the “Products” tab on the left side of the header. From the dropdown menu that appears, select “Models.” You will be redirected to a page where you can view all of your current projects. To create a new project, click on the plus (+) sign next to “Projects” on the top-left side of the screen. You will then be redirected to a page where you can choose your project type. Select “Hive Models.”

[Image: Project Types]

You will then be redirected to another page containing our available models. Click to select the desired model for your project.

[Image: Project Format]

After selecting your desired model, you will need to configure your project. Change your project’s name using the text box, and once you hit the blue “Create” button, your project will be live.

[Image: Project Configure]

Upon project creation, you will be redirected to the project interface. Here, you can view your API key by clicking the “API Keys” button in the top right.

[Image: Project Interface]

Using this API key, you can call the API by making a cURL request in your terminal. To interpret the results, please refer to our documentation and look up the relevant model and its class definitions.

[Image: Sample cURL Request and Result]

For pricing details, please reference our model pricing table here. If you run into any issues building your projects, please feel free to reach out to us at support@thehive.ai and we will be happy to help.

If you have any further questions or would like to learn more, please reach out to sales@thehive.ai or contact us here.
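As a rough companion to the cURL request mentioned above, here is a minimal sketch of submitting a task to a project from Python using the project’s API key. The endpoint path and parameter names are assumptions for illustration; the documentation and your project’s interface show the exact request format for each model.

```python
# Hypothetical sketch of calling a Hive Models project with its API key.
# The endpoint path, auth header, and parameter names are illustrative
# assumptions; use the request format shown in the project interface/docs.
import requests

API_KEY = "YOUR_PROJECT_API_KEY"  # from the "API Keys" button in the project interface
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"  # assumed synchronous task endpoint

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_KEY}"},
    data={"text_data": "Summarize this paragraph in one sentence."},  # parameter name assumed; varies by model
    timeout=30,
)
resp.raise_for_status()

# Interpret the returned classes, scores, or generations using the model's
# class definitions in the documentation.
print(resp.json())
```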