Model insights

Visual AI provides several tools to help visually assess, understand, and evaluate model performance:

  • Image embeddings allow you to view projections of images in two dimensions to see visual similarity between a subset of images and help identify outliers.
  • Activation maps highlight regions of an image according to its importance to a model's prediction.
  • Image Prediction Explanations illustrate what drives predictions, providing a quantitative indicator of the effect variables have on the predictions.
  • The Neural Network Visualizer provides a visual breakdown of each layer in the model's neural network.

Image Embeddings and Activation Maps are also available from the Insights tab, allowing you to more easily compare models, for example, if you have applied tuning.

Additionally, the standard DataRobot insights (Confusion Matrix (for multiclass classification), Feature Impact, and Lift Chart, for example) are all available.

Image Embeddings

From the Understand > Image Embeddings tab, you can view up to 100 images from the validation set projected onto a two-dimensional plane (using a technique that preserves similarity among images). This visualization answers the questions: What does the featurizer consider to be similar? Does this match human intuition? Is the featurizer missing something obvious?
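
The general technique can be sketched outside DataRobot. The snippet below is a minimal illustration, not DataRobot's implementation: it projects placeholder featurizer outputs onto two dimensions with scikit-learn's t-SNE so that nearby points correspond to visually similar images.

```python
# A minimal sketch of the idea behind image embeddings, not DataRobot's
# implementation: take high-dimensional featurizer outputs and project them
# onto a 2D plane with a similarity-preserving technique (here, t-SNE).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 512))   # placeholder featurizer outputs, one row per image

# Project to two dimensions; nearby points correspond to visually similar images.
embedding_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding_2d.shape)                # (100, 2) -> x/y coordinates to plot
```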

To work with Image Embeddings:

  • Filter the display by class (1): DataRobot displays all classes by default; use the dropdown to select a specific class.

  • Zoom and reset the display size (2): Use the controls to zoom in (and out) on the display, or double-click to zoom. Additionally, you can click-and-drag to move areas of the display into focus.

  • Identify actual class (3): Hover on an image to view its associated class. Use this to compare images and see whether DataRobot is grouping images as you would expect.

See the reference material for more information.

Activation Maps

With Activation Maps, you can see which image areas the model is using when making predictions—which parts of the images are driving the algorithm prediction decision. See the reference material for more information.

An activation map can indicate whether your model is looking at the foreground or background of an image, and whether it is focusing on the right areas. For example, is it looking only at “healthy” areas of a plant when disease is present and, because it does not use the whole leaf, classifying the image as "no disease"? Is there a problem with overfitting or target leakage? These maps help to determine whether the model would be more effective if it were tuned.

To use the maps, select Understand > Activation Maps for a model. DataRobot previews up to 100 sample images from the project's validation set:

Filters (1)

Filters allow you to narrow the display based on the predicted and the actual class values. The initial display shows the full sample (i.e., both filters are set to All). You can set either filter to a specific class to limit the display. Some examples:

"Predicted" filter "Actual" filter Display results
All All All (up to 100) samples from the validation set
Tomato Leaf Mold All All samples in which the predicted class was Tomato Leaf Mold
Tomato Leaf Mold Tomato Leaf Mold All samples in which both the predicted and actual class were Tomato Leaf Mold
Tomato Leaf Mold Potato Blight Any sample in which DataRobot predicted Tomato Leaf Mold but the actual class was potato blight
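
To make the filter logic concrete, the sketch below applies the same predicted/actual filtering to a hypothetical table of sample predictions with pandas; the column names and values are illustrative only, not DataRobot output.

```python
# Illustrative only: filter hypothetical sample predictions the same way the
# "Predicted" and "Actual" filters narrow the activation map display.
import pandas as pd

samples = pd.DataFrame({
    "image_id":  [1, 2, 3, 4],
    "predicted": ["Tomato Leaf Mold", "Tomato Leaf Mold", "Potato Blight", "Tomato Leaf Mold"],
    "actual":    ["Tomato Leaf Mold", "Potato Blight", "Potato Blight", "Tomato Leaf Mold"],
})

# Predicted = Tomato Leaf Mold, Actual = Potato Blight (misclassified samples only).
mask = (samples["predicted"] == "Tomato Leaf Mold") & (samples["actual"] == "Potato Blight")
print(samples[mask])
```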

Hover over an image to see the reported predicted and actual classes for the image:

Color overlay (2)

DataRobot provides two different views of the activation maps—black and white (which shows some transparency of original image colors) and full color. Select the option that provides the clearest contrast. For example, for black and white datasets, the alternative color overlay may make activation areas more obvious (instead of using a black-to-transparent scale). Toggle Show color overlay to compare.
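
As a rough illustration of the difference (using placeholder arrays, not DataRobot output), the snippet below renders the same coarse activation map once on a grayscale overlay and once with a full-color overlay, which is the kind of contrast difference the toggle exposes.

```python
# Illustrative only: render one activation map with a grayscale overlay and
# with a full-color overlay to compare contrast (arrays are placeholders).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
image = rng.random((224, 224))         # placeholder black-and-white image
activation_map = rng.random((7, 7))    # placeholder coarse activation map

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, cmap, title in [(axes[0], "gray", "black and white"), (axes[1], "jet", "color overlay")]:
    ax.imshow(image, cmap="gray")
    # Stretch the coarse map over the full image and blend it with 50% opacity.
    ax.imshow(activation_map, cmap=cmap, alpha=0.5, extent=(0, 224, 224, 0))
    ax.set_title(title)
plt.show()
```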

Activation scale (3)

The high-to-low activation scale indicates how much of a region in an image is influencing the prediction. Areas that are higher on the scale have a higher predictive influence—the model used something that was there (or not there, but should have been) to make the prediction. Some examples might include the presence or absence of yellow discoloration on a leaf, a shadow under a leaf, or an edge of a leaf that curls in a certain way.

Another way to think of scale is that it reflects how much the model "is excited by" a particular region of the image. It’s a kind of prediction explanation—why did the model predict what it did? The map shows that the algorithm saw x in this region, which activated the filters sensitive to visual information like x.
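
As a rough sketch of how such a map can be computed (a generic class-activation-map style calculation with placeholder arrays, not DataRobot's exact method), each spatial cell's influence is a weighted sum of convolutional feature channels, normalized onto a high-to-low scale:

```python
# A generic class-activation-map style sketch, not DataRobot's exact method.
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((7, 7, 256))   # placeholder last-conv-layer output (H, W, C)
class_weights = rng.random(256)          # placeholder weights linking channels to the predicted class

# Weighted sum over channels gives a coarse spatial map of influence.
activation_map = feature_maps @ class_weights        # shape (7, 7)

# Normalize to [0, 1] so it can be shown on a high-to-low color scale.
activation_map -= activation_map.min()
activation_map /= activation_map.max()
print(activation_map.round(2))           # higher values = regions with more influence
```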

Neural Network Visualizer

The Describe > Neural Network Visualizer tab illustrates the order of, and connections between, the layers in a model's neural network. By describing the order of connections and the inputs and outputs of each layer, it helps you verify that the layers are connected in the expected order.
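
Outside DataRobot, the same kind of check can be sketched with any deep learning framework. For example, with Keras (the model below is illustrative only, not a DataRobot blueprint), printing a summary lists the layers in connection order with each layer's output shape:

```python
# Illustrative only: build a small Keras model and list its layers in order,
# analogous to verifying layer connections and shapes in a visualizer.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

# Prints each layer in connection order with its output shape and parameter count.
model.summary()
```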

With the visualizer, you can explore the network structure by:

  • Clicking and dragging left and right to see all layers.
  • Clicking to expand or collapse a grouped layer, displaying/hiding all layers in the group.

  • Clicking Display all layers to load the blueprint with all layers expanded.

  • Using the Select graph dropdown (available for blueprints that contain multiple neural networks) to display the visualization for a specific network.


Updated October 26, 2021