
Visual AI tuning guide

In this guide, you will step through several recommended methods for maximizing Visual AI classification accuracy with a boat dataset containing nine classes and approximately 1,500 images. You can get the dataset here.

Start the project with class as the target. When the Leaderboard builds, change the displayed optimization metric from LogLoss to Accuracy. Under the cross-validation score, you'll see that the top model achieved 83.68% accuracy.

Use the steps below to improve the results:

1. Run with Comprehensive mode

The first modeling run, using Quick mode, generated results by exploring a limited set of the available blueprints; many more are available in Comprehensive mode. Click the Configure modeling settings option in the right pane and select the Comprehensive modeling mode to re-run the modeling process, building additional models while prioritizing accuracy.

This results in a model with a much higher accuracy of 91.45%.

2. Explore other image featurizers

In a model's blueprint, images are first turned into numbers ("featurized") so that they can be passed to a modeling algorithm and combined with other features (numeric, categorical, text, etc.). The featurizer takes the binary content of image files as input and produces a feature vector that represents key characteristics of that image at different levels of complexity. These feature vectors are then used downstream as input to a modeler. DataRobot provides several featurizers based on pre-trained neural network architectures.
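Conceptually, a featurizer is a frozen, pre-trained function that maps raw pixel data to a fixed-length vector. The toy sketch below illustrates that idea only; the function name and the single random projection are stand-ins, not DataRobot's actual networks:

```python
import numpy as np

def toy_featurizer(image, n_features=8, seed=0):
    """Toy stand-in for a pre-trained CNN featurizer (illustration only):
    collapses an H x W x 3 image into a fixed-length feature vector."""
    rng = np.random.default_rng(seed)
    # A real featurizer applies many learned convolutional layers; here a
    # single fixed random projection of pooled pixel statistics stands in
    # for those frozen "pre-trained" weights.
    pooled = image.reshape(-1, 3).mean(axis=0)      # crude global pooling
    weights = rng.normal(size=(3, n_features))      # frozen weights
    return np.tanh(pooled @ weights)                # fixed-length encoding

image = np.random.default_rng(1).random((224, 224, 3))  # fake RGB image
vector = toy_featurizer(image)
print(vector.shape)  # (8,)
```

Because the output is just a numeric vector, it can be concatenated with numeric, categorical, or text features before being handed to the downstream modeler.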

To explore improvements with other image featurizers, select the top model on the Leaderboard and view its blueprint, which shows the featurizer used.

From the Advanced Tuning tab, scroll to the current network to bring up the menu of options.

Try different network hyperparameters (scroll to the bottom and select Begin Tuning after each change). After tuning the top Comprehensive mode model with each available image featurizer, you can further explore variations of those featurizers in the top-performing models.

3. Feature granularity

Featurizers are deep convolutional neural networks made up of sequential layers, each layer aggregating information from previous layers. The first layers capture low-level patterns made of a few pixels: points, edges, and corners. The next layers capture shapes and textures. The final layers capture objects. You can select the level of features you want to extract from the neural network model, tuning and optimizing results (although the more layers enabled, the longer the run time).

Toggles for feature granularity options (highest, high, medium, low) are found below the network section in the Advanced Tuning menu.

Any combination of these can be used, and the context of your problem/data can direct which features might provide the most useful information.
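One way to picture the granularity toggles: each enabled level contributes a feature vector summarizing the image at a different spatial scale, and the enabled levels are combined. The sketch below mimics that with numpy average pooling; the grid sizes and level names are illustrative assumptions, not the platform's internals:

```python
import numpy as np

def features_at_level(image, pool):
    """Average-pool the image on a (pool x pool) grid; coarser grids mimic
    deeper layers that summarize larger regions (illustration only)."""
    h, w, c = image.shape
    blocks = image[: h - h % pool, : w - w % pool].reshape(
        pool, h // pool, pool, w // pool, c
    ).mean(axis=(1, 3))
    return blocks.ravel()

image = np.random.default_rng(0).random((224, 224, 3))
# "low" = fine spatial detail, "highest" = most abstract summary
levels = {"low": 8, "medium": 4, "high": 2, "highest": 1}
enabled = ["medium", "highest"]      # any combination can be toggled on
vector = np.concatenate([features_at_level(image, levels[k]) for k in enabled])
print(vector.shape)  # (4*4*3 + 1*1*3,) = (51,)
```

Enabling more levels lengthens the feature vector (and the run time), which is the trade-off the section above describes.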

4. Image augmentation

Following featurizer tuning, you can explore changes to your input data with image augmentation to improve model accuracy. By creating new images for training by randomly transforming existing images, you can build insightful projects with datasets that might otherwise be too small.

Image augmentation is available at project setup in the Image Augmentation advanced options or after modeling in Advanced Tuning.

Domain expertise can provide insight into which transformations could show the greatest impact. Otherwise, a good place to start is with rotation, then rotation + cutout, followed by other combinations.
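To make the rotation and cutout transformations concrete, here is a minimal numpy sketch of generating new training variants from one image. It is a simplification (90-degree rotations, fixed-size cutout) rather than DataRobot's augmentation engine:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Create a new training image by randomly transforming an existing
    one (simplified sketch: random rotation plus cutout)."""
    out = np.rot90(image, k=rng.integers(4))      # random 90-degree rotation
    # Cutout: blank a random square patch so the model can't over-rely
    # on any single region of the image.
    size = out.shape[0] // 4
    y = rng.integers(out.shape[0] - size)
    x = rng.integers(out.shape[1] - size)
    out = out.copy()
    out[y:y + size, x:x + size] = 0
    return out

image = rng.random((64, 64, 3))
batch = np.stack([augment(image) for _ in range(8)])  # 8 variants of one image
print(batch.shape)  # (8, 64, 64, 3)
```

Each call yields a different transformed copy, which is how augmentation effectively enlarges a small dataset.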

5. Classifier hyperparameters

The training hyperparameters of the classifier, the component receiving the image feature encodings from the featurizer, are also exposed for tuning in the Advanced Tuning menu.

To set a new hyperparameter, enter a value in the Enter value field in one of the following ways:

  • Select one of the prepopulated values (clicking any value listed in orange enters it into the value field).

  • Type a value into the field. Refer to the Acceptable Values field, which lists either constraints for numeric inputs or predefined allowed values for categorical inputs (“selects”). To enter a specific numeric value, type a value or range meeting the criteria of Acceptable Values:

For a hyperparameter that accepts values between 0.00001 and 1, for example, you can enter:

  • 0.2 to select an individual value.
  • 0.2, 0.4, 0.6 to list values that fall within the range; use commas to separate a list.
  • 0.2-0.6 to specify the range and let DataRobot select intervals between the high and low values; use hyphen notation to specify a range.
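The three entry formats above can be summarized with a small parser. This is an illustrative interpretation of the notation, not DataRobot's own parsing code, and the evenly spaced interval choice for ranges is an assumption:

```python
def parse_values(entry, low=0.00001, high=1.0, n_grid=3):
    """Interpret a tuning-field entry: a single value, a comma-separated
    list, or a hyphenated range (illustrative parser only)."""
    entry = entry.strip()
    if "," in entry:                          # comma-separated list
        values = [float(v) for v in entry.split(",")]
    elif "-" in entry.lstrip("-"):            # range: tool picks intervals
        lo, hi = (float(v) for v in entry.split("-", 1))
        step = (hi - lo) / (n_grid - 1)
        values = [lo + i * step for i in range(n_grid)]
    else:                                     # single value
        values = [float(entry)]
    # Enforce the constraints shown in the Acceptable Values field.
    assert all(low <= v <= high for v in values), "outside Acceptable Values"
    return values

print(parse_values("0.2"))            # [0.2]
print(parse_values("0.2, 0.4, 0.6"))  # [0.2, 0.4, 0.6]
print(parse_values("0.2-0.6"))        # three evenly spaced values, 0.2 to 0.6
```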

Fine-tuning tips

  • For speed improvements, reduce Early Stopping Patience. The default is 5; try setting it to 2, since a patience of 5 can sometimes lead to 40+ epochs of training.

  • Change the loss function to focal_loss if the dataset is imbalanced, since focal_loss generalizes well for imbalanced datasets.

  • For faster convergence, change reduce_lr_patience to 1 (the default is 3).

  • Change model_name to efficientnet-b0 if you are aiming for better fine-tuner accuracy. The default is mobilenet-v3.
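The focal_loss tip is worth unpacking: focal loss, FL(p_t) = -(1 - p_t)^γ · log(p_t), down-weights easy, well-classified examples, so rare classes contribute relatively more to the gradient. A minimal binary sketch (illustrative, not the platform's implementation):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss, FL(p_t) = -(1 - p_t)**gamma * log(p_t).
    The (1 - p_t)**gamma factor shrinks the loss on confident, correct
    predictions, which is why it tends to help on imbalanced datasets."""
    p_t = np.where(y == 1, p, 1 - p)   # probability assigned to the true class
    return -((1 - p_t) ** gamma) * np.log(p_t)

p = np.array([0.9, 0.6, 0.1])          # predicted P(class 1)
y = np.array([1, 1, 1])                # all three are true positives
fl = focal_loss(p, y)
print(np.round(fl, 4))                 # the easy example (0.9) contributes little

# Compare with plain log loss, which penalizes the easy example far more:
print(np.round(-np.log(p), 4))
```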

Set the search type by clicking the Select search option and selecting either:

  • Smart Search (default) performs a sophisticated pattern search (optimization) that emphasizes areas where the model is likely to do well and skips hyperparameter points that are less relevant to the model.

  • Brute Force evaluates each data point, which can be more time and resource intensive.
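The cost difference between the two search types is easy to see with a toy grid. The sketch below enumerates every combination the way Brute Force does; the scoring function is a hypothetical stand-in for a cross-validated model evaluation:

```python
from itertools import product

grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

def score(params):
    """Hypothetical stand-in for a cross-validated model evaluation."""
    return (-abs(params["learning_rate"] - 0.01)
            - abs(params["batch_size"] - 32) / 100)

# Brute Force: evaluate every combination in the grid.
candidates = [dict(zip(grid, combo)) for combo in product(*grid.values())]
best = max(candidates, key=score)
print(len(candidates), best)  # 9 evaluations for a 3 x 3 grid

# A smart search would instead evaluate a few points, then concentrate
# further trials near the best-scoring ones, skipping unpromising regions.
```

The grid grows multiplicatively with each hyperparameter, which is why Brute Force becomes time- and resource-intensive quickly.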

Recommended hyperparameters to search first vary by the classifier used. For example:

  • Keras model: batch size, learning rate, hidden layers, and initialization
  • XGBoost: number of variables, learning rate, number of trees, and subsample per tree
  • ENet: alpha and lambda

Bonus: Fine-tuned blueprints

Fine-tuned model blueprints may prove useful for datasets that greatly differ from ImageNet, the dataset leveraged for the pre-trained image featurizers. These blueprints are found in the model repository and can be trained from scratch (random weight initialization) or with the pre-trained weights. Many of the tuning steps described above also apply to these blueprints; however, keep in mind that fine-tuned blueprints require extended training times. In addition, pre-trained blueprints achieve better scores than fine-tuned blueprints in the majority of cases, as they did with the boats dataset.

Final results

Running in Comprehensive mode improved the accuracy score of the top Quick mode model (83.68%) to 91.45%. Following the additional steps outlined here to maximize model performance with the most effective settings within the platform resulted in a final accuracy of 92.92%.

Updated November 8, 2023