
How to Train Models with Hive AutoML

What is Hive AutoML?

Hive’s AutoML platform allows you to quickly train, evaluate, and deploy machine learning models for your own custom use cases. The process is simple — just select your desired model type, upload your datasets, and you’re ready to begin training! 

Since we announced the initial release of our AutoML platform, we’ve added support for Large Language Model training. Now you can build everything from classification models to chatbots, all in the same intuitive platform. To illustrate how easy the model-building process is, we’ll walk through it step-by-step with each type of model. We’ll also provide a link to the publicly available dataset we used as an example so that you can follow along.

Training an Image Classification Model

First we’re going to create an Image Classification model. This type of model is used to identify certain subjects, settings, and other visual attributes in both images and videos. For this example, we’ll be using a snacks dataset to identify 20 different kinds of food (strawberries, apples, hot dogs, cupcakes, etc.). To follow along with this walkthrough, first download the images from this dataset, which are sorted into separate files for each label.

Formatting the Datasets

After downloading the image data, we’ll need to put it in the correct format for AutoML training. For Image Classification datasets, the platform requires a CSV file with one column of image URLs titled “image_url” and up to 20 other columns for the classification categories (heads) you wish to use. This requires creating a publicly accessible link for each image in the dataset. For this example, all 20 of our food categories belong to a single head: food type. We formatted our CSV as follows:

The snacks dataset in the correct format for our AutoML platform
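As a rough sketch, assembling that CSV can be scripted. The “image_url” column name is required by the platform; the URLs and the single “food_type” head below are illustrative stand-ins for your own publicly hosted images.

```python
# Sketch: build the Image Classification training CSV from a mapping of
# (hypothetical) public image URLs to food-type labels.
import csv
import io

labeled_images = {
    "https://example.com/snacks/strawberry_001.jpg": "strawberry",
    "https://example.com/snacks/hotdog_014.jpg": "hot dog",
    "https://example.com/snacks/cupcake_007.jpg": "cupcake",
}

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["image_url", "food_type"])  # required header + our one head
for url, label in labeled_images.items():
    writer.writerow([url, label])

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # -> image_url,food_type
```

In practice you would write the rows to a file (one row per image) and upload that file to the Datasets section.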

This particular dataset is within the size limitations for Image Classification datasets. When uploading your own dataset, it is crucial that you ensure it meets all of the sizing requirements and other specifications or the dataset upload will fail. These requirements can be found in our AutoML documentation.

Both test and validation datasets are provided as part of the snacks dataset. When using your own datasets, you can choose to upload a test dataset or to split off a random section of your training data to use instead. If you choose the latter, you will be able to select what percentage of that data you want to use as test data as you create your training.

Uploading the Datasets

Before we start building the model, we first need to upload both our training and test datasets to the “Datasets” section of our AutoML platform. This part of the platform validates each dataset before it can be used for training and stores all datasets for easy access when building future models. We’ll upload the training and test datasets separately, naming them Snacks (Train) and Snacks (Test) respectively.

Creating a Training

To start building the model, we’ll head to our AutoML platform and select the “Create New Model” button. We’ll then be brought to a project setup page that prompts us to enter a project name and description. For Model Type, we’ll select “Image Classification.” On the right side of the screen, we can add our training dataset by selecting from our dataset library. We’ll select the Snacks (Train) and Snacks (Test) datasets that we just uploaded.

The “Create New Model” page

And just like that, we’re ready to start training our model! To begin the training process, we’ll click the “Start Training Model” button. The model’s status will shift to “Queued,” then to “In Progress” while the model trains. This will likely take several minutes. When training is complete, the status will display as “Completed.”

Evaluating the Model

After model training is complete, the page for that project will show various performance metrics so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The model’s project page after training has completed
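To make the threshold slider’s effect concrete, here is a minimal sketch of how precision and recall shift with the confidence threshold. The scores and labels are invented, and this illustrates the standard metric definitions, not Hive’s internal implementation.

```python
# Sketch: predictions scoring at or above the threshold count as positives,
# and precision/recall are recomputed from the resulting counts.
def precision_recall(scores, labels, threshold):
    predicted_pos = [l for s, l in zip(scores, labels) if s >= threshold]
    tp = sum(predicted_pos)                  # positives we got right
    fp = len(predicted_pos) - tp             # positives we got wrong
    fn = sum(labels) - tp                    # true positives we missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.80, 0.60, 0.40, 0.20]  # made-up model confidences
labels = [1,    1,    0,    1,    0]     # 1 = actually in the class
p, r = precision_recall(scores, labels, threshold=0.5)
# at threshold 0.5: precision = 2/3, recall = 2/3
```

Raising the threshold to 0.9 keeps only the top prediction, pushing precision to 1.0 while recall falls to 1/3, which is the trade-off the slider lets you explore.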

Below that, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that two images of cupcakes were incorrectly classified as cookies — an understandable mistake as the two are both decorated desserts.

The confusion matrix for our snacks model

These detailed metrics can help us decide which categories to target if we want to train a better version of the model. If you would like to retrain your model, you can click the “Update Model” button to begin the training process again.

Deploying the Model

Even after just this first training run, we’re pretty happy with how the model turned out. We’re ready to deploy the model and start using it. To deploy, select the project and click the “Create Deployment” button in the top right corner. The project’s status will shift to “Deploying.” The deployment may take a few minutes.

Submitting Tasks via API

After the deployment is complete, we’re ready to start submitting tasks via API, just as we would with any pre-trained Hive model. We can click on the name of any individual deployment to open the project on Hive Data, where we can upload tasks, view tasks, and access our API key. There is also a button to “Undeploy” the project, if we wish to deactivate it at any point. Undeploying a model is not permanent — we can redeploy the project if we later choose to.
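As a rough sketch, a task submission might look like the snippet below. Treat the endpoint URL and field names as assumptions to verify against your project’s page on Hive Data and the API documentation; only the API-key flow comes from the walkthrough above.

```python
# Hypothetical sketch of building a task-submission request. The endpoint and
# payload field names are placeholders, not Hive's documented API.
import json

API_KEY = "YOUR_PROJECT_API_KEY"  # copied from the deployment's page on Hive Data
TASK_URL = "https://api.thehive.ai/api/v2/task/sync"  # placeholder endpoint

def build_request(image_url: str) -> tuple[dict, dict]:
    """Return (headers, payload) for one classification task."""
    headers = {"Authorization": f"Token {API_KEY}"}
    payload = {"url": image_url}
    return headers, payload

headers, payload = build_request("https://example.com/snacks/cupcake_007.jpg")
print(json.dumps(payload))
# Sending is then one call with your HTTP client of choice, e.g.:
# requests.post(TASK_URL, headers=headers, data=payload)
```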

To see a video of the entire training and deployment process for an Image Classification model, head over to our YouTube channel.

Training a Text Classification Model

We’ll now walk through that same training process in order to build a Text Classification model, but with a few small differences. Text classification models can be used to sort and tag text content by topic, tone, and more. For this example, we’ll use the Twitter Sentiment Analysis dataset posted by user carblacac on Hugging Face. This dataset consists of a series of short text posts originally published to Twitter and whether they have a negative (0) or positive (1) overall sentiment. To follow along with this walkthrough, you can download the dataset here.

Formatting the Datasets

For Text Classification datasets, our AutoML platform requires a CSV with the text data in a column titled “text_data” and up to 20 other columns that each represent classification categories, also called model heads. Using the Twitter Sentiment Analysis dataset, we only need to rename the columns like so:

Our Twitter Sentiment Analysis data formatted correctly for our AutoML platform
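Here is a minimal sketch of that rename, assuming the downloaded file’s original columns are named “text” and “feeling” (check your copy of the dataset); “text_data” is the name the platform requires, and “sentiment” is our chosen head name.

```python
# Sketch: rewrite the downloaded CSV's header row to the names AutoML expects.
import csv
import io

# toy stand-in for the downloaded file
raw = "text,feeling\nwhat a great day!,1\nugh. stuck in traffic again,0\n"

reader = csv.DictReader(io.StringIO(raw))
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["text_data", "sentiment"])  # required / renamed headers
for row in reader:
    writer.writerow([row["text"], row["feeling"]])

print(out.getvalue().splitlines()[0])  # -> text_data,sentiment
```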

The data consists of two sets: a training set with 150k examples and a test set with 62k examples. Before we upload our dataset, however, we must ensure that it fits our Text Classification dataset requirements. The training set does not: our AutoML platform only accepts CSV files with 100,000 rows or fewer, and this one has 150,000. To use this dataset, we’ll have to remove some examples from the set. To keep the number of examples for each class relatively equal, we removed 25,000 negative (0) examples and 25,000 positive (1) ones.
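The balanced trim can be sketched as below. It runs on a toy list of (text, label) pairs; with the real data, `rows` would hold the 150k training examples and `drop_per_class` would be 25,000.

```python
# Sketch: drop the same number of randomly chosen rows from each class so the
# file falls under the 100,000-row limit while class balance is preserved.
import random

random.seed(0)
rows = [("tweet %d" % i, i % 2) for i in range(20)]  # toy data, 10 per class

def trim_per_class(rows, drop_per_class):
    kept = []
    for label in {r[1] for r in rows}:
        members = [r for r in rows if r[1] == label]
        random.shuffle(members)                       # drop a random subset
        kept.extend(members[: len(members) - drop_per_class])
    return kept

trimmed = trim_per_class(rows, drop_per_class=3)
print(len(trimmed))  # -> 14  (7 per class, down from 10 each)
```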

Uploading the Datasets

After fixing the size issue, we’re ready to upload our datasets. As is the case with all model types, we must first upload any datasets we are going to use before we create our training.

Creating a Training

After both the training and test datasets have been validated, we’re ready to start building our model. On our AutoML platform, we’ll click the “Create New Model” button and enter a project name and description. For our model type, this time we’ll select “Text Classification.” Finally, we’ll add our training and test datasets that we just uploaded.

We’re then ready to start training! This aspect of the training process is identical to the one shown above for an Image Classification model. Just click the “Start Training Model” button on the bottom right corner of the screen. When training is complete, the status will display as “Completed.”

Evaluating the Model

Just like in our Image Classification example, the project page will show various performance metrics after training is complete so that we can evaluate our model. At the top of the page we can select the head and, if desired, the class that we’d like to evaluate. We can also use the slider to control the confidence threshold. Once selected, you will see the precision, recall, and balanced accuracy.

The project page for our Twitter Sentiment Analysis model after it has completed training

Below the precision, recall, and balanced accuracy, you can view the precision/recall curve (P/R curve) as well as a confusion matrix that shows how many predictions were correct and incorrect per class. This gives us a more detailed understanding of what the model misclassified. For example, we can see here that while there were a fair number of mistakes for each class, positive examples were mistaken for negative ones more often than the other way around.

While these results are not as strong as those of our Image Classification example, that is somewhat expected: sentiment analysis is a more complex and difficult classification task. This model could certainly be improved by retraining with slightly different data, but for now we’ll demonstrate how to deploy it. To retrain your model, all you need to do is click the “Update Model” button and begin the training process again.

Deploying the Model

Deploying your model is the exact same process as described above in the Image Classification example. After the deployment is complete, you’ll be able to view the deployment on Hive Data and access the API keys needed in order to begin using the model. 

To see a video of the entire training and deployment process for a Text Classification model, head over to our YouTube channel.

Training a Large Language Model

Finally, we’ll walk through the training process for a Large Language Model (LLM). This process is slightly different from the training process for our classification model types, both in terms of dataset formatting and model evaluation.

Our AutoML platform supports two different types of LLMs: Text and Chat. Text models are geared towards generating passages of writing or lines of code, whereas chat models are built for interactions with the user, often in the format of asking questions and receiving concise, factual answers. For this example, we’ll be using the Viggo dataset uploaded by GEM to Hugging Face. To follow along with us as we build the model, you can download the training and test sets here.

Formatting the Datasets

This dataset supports the task of summarizing and restructuring text into a very specific syntax. All data is within the video game domain, and all prompts take the form of either questions or statements about various games. The goal of the model is to take these prompts, extract the main idea behind them, and reformat them. For example, the prompt “Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it’s just not good” becomes “give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor]).”

First, we’ll check that this dataset is valid per our guidelines for AutoML datasets. At around 5,000 rows, it is well under the limit of 50,000. To correct the formatting, we just need to make sure the prompt is in a column titled “prompt” and the expected completion is in a column titled “completion”; all other columns can be removed. From this dataset, we will use the column “target” as “prompt” and the column “meaning_representation” as “completion.” The final CSV is as shown below:

The Viggo dataset ready to upload to our AutoML platform
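A quick sketch of that column selection and rename, using the “target” → “prompt” and “meaning_representation” → “completion” mapping described above; the toy row stands in for the real Viggo CSV.

```python
# Sketch: keep only the two columns AutoML needs and rename them.
import pandas as pd

df = pd.DataFrame({
    "gem_id": ["viggo-train-0"],  # extra column, will be dropped
    "target": ["Guitar Hero: Smash Hits launched in 2009 but plays like a game from 1989, it's just not good"],
    "meaning_representation": ["give_opinion(name[Guitar Hero: Smash Hits], release_year[2009], rating[poor])"],
})

df = df[["target", "meaning_representation"]].rename(
    columns={"target": "prompt", "meaning_representation": "completion"}
)
csv_text = df.to_csv(index=False)  # ready to write out and upload
print(csv_text.splitlines()[0])  # -> prompt,completion
```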

Uploading the Datasets

Now let’s upload our datasets. We’ll be using both the training and test datasets from the Viggo dataset as provided here. After both datasets have been validated, we’re ready to train the model.

Creating a Training

We’ll head back to our Models page and select “Create New Model”. This time, the project type should be “Language Generative – Text”. We’ll then choose our training and test datasets from the list of datasets we’ve already uploaded to the platform. Then we’ll start the training!

Evaluating the Model

For Large Language Models, the metrics page looks a little different than it does for our classification models.

The project page for the Viggo model after it has completed training

The loss measures how closely the model’s response matches the reference response from the test data: 0 represents a perfect prediction, and a higher loss signifies that the prediction is increasingly far from the actual response sequence. If the response has 10 tokens, the model predicts each of the 10 tokens in turn, conditioned on all of the preceding reference tokens, and we display the final numerical loss value.
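As a toy illustration of a per-token loss of this kind, consider the average negative log-probability the model assigns to each correct next token. The probabilities below are invented for the example, not taken from a real model.

```python
# Sketch: per-token negative log-likelihood, averaged over the reference
# response. A perfect model assigns probability 1.0 everywhere, giving loss 0.
import math

token_probs = [0.9, 0.8, 0.95, 0.7]  # made-up probability of each correct token

loss = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(round(loss, 3))  # -> 0.184
```

Lower probabilities on the correct tokens drive the loss up, which is why the metric grows as predictions drift further from the reference sequence.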

You can also evaluate your model by interacting with it in what we call the playground. Here you can submit prompts directly to your model and view its response, allowing model evaluation through experimentation. This will be available for 15 days after model training is complete, and has a limit of 500 requests. If either the time or request limit is reached, you can instead deploy the model and continue to use the playground feature without limits; this usage is charged to the organization’s billing account.

For our Viggo model, all metrics are looking pretty good. We entered a few prompts into the playground to further test it, and the results showed no issues.

An example query and response from the playground feature

Deploying the Model

The process to deploy a Large Language Model is the same as it is for our classification models. Just click “Create Deployment” and you’ll be ready to submit API requests in just a few short minutes.

To see a video of the entire training and deployment process for an LLM, head over to our YouTube channel.

Final Thoughts

We hope this in-depth walkthrough of how to build different types of machine learning models with our AutoML platform was helpful. Keep an eye out for more AutoML tutorials in the coming weeks, such as a detailed guide to Retrieval Augmented Generation (RAG), data stream management systems (DSMS), and other exciting features we support.

If you have any further questions or run into any issues as you build your custom-made AI models, please don’t hesitate to reach out to us at support@thehive.ai and we will be happy to help. To inquire about testing out our AutoML platform, please contact sales@thehive.ai.

Dataset Sources

All datasets linked as examples in this post are publicly available for a wide range of uses, including commercial use. The snacks dataset and Viggo dataset are both licensed under a Creative Commons Attribution Share-Alike 4.0 (CC BY-SA 4.0) license. They can be found on Hugging Face here and here. The Twitter Sentiment Analysis dataset is licensed under the Apache License, Version 2.0. It is available on Hugging Face here. None of these datasets may be used except in compliance with their respective license agreements.