
Model insights

When you select a model, DataRobot makes available a large selection of insights appropriate to that model, grouped by purpose.

Model Leaderboard

The model Leaderboard is a list of models ranked by the chosen performance metric, with the best models at the top of the list. It provides a variety of insight tabs, available based on user permissions and applicability. Hover over an inactive tab division to view a dropdown of the tabs it contains.

Note

Tabs are visible only if they are applicable to the project type. For example, time series-related tabs (e.g., Accuracy Over Time) only display for time series projects. Tabs that are applicable to a project but not a particular model type display as grayed out (for example, blender models, due to the nature of their construction, have fewer tab functions available).

The pages within this section provide information on using and interpreting the insights available from the Leaderboard (Models tab). See the Leaderboard reference for information on the badges and components of the Leaderboard as well as functions such as tagging, searching, and exporting data.
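
The Leaderboard described here is the in-app view; the same ranking can also be read programmatically. The following is a minimal sketch assuming the DataRobot Python client (`datarobot` package); the endpoint, API token, and project ID are placeholders.

```python
import datarobot as dr

# Connect to the DataRobot API. The endpoint, token, and project ID below
# are placeholders, not values from this documentation.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# The Leaderboard, as returned by the API: models for a project, ranked by
# the project's optimization metric.
project = dr.Project.get("YOUR_PROJECT_ID")
models = project.get_models()

for model in models[:5]:
    validation_score = model.metrics[project.metric]["validation"]
    print(f"{model.model_type}: {project.metric} = {validation_score}")
```

Individual insights described in the table below (for example, Feature Impact or the ROC Curve) can be retrieved in a similar way; see the sketch that follows the table.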

Leaderboard tabs

| Tab name | Description |
|----------|-------------|
| **Evaluate: Key plots and statistics for judging model effectiveness** | |
| Accuracy Over Space | Provides a spatial residual mapping within an individual model. |
| Accuracy Over Time | Visualizes how predictions change over time. |
| Advanced Tuning | Allows you to manually set model parameters, overriding the DataRobot selections. |
| Anomaly Assessment | Plots data for the selected backtest and provides SHAP explanations for up to 500 anomalous points. |
| Anomaly Over Time | Plots how anomalies occur across the timeline of your data. |
| Confusion Matrix | Compares actual data values with predicted data values in multiclass projects. For binary classification projects, use the confusion matrix on the ROC Curve tab. |
| Feature Fit | Removed. See Feature Effects. |
| Forecasting Accuracy | Provides a visual indicator of how well a model predicts at each forecast distance in the project's forecast window. |
| Forecast vs Actual | Compares how predictions made at different forecast points behave across different times in the future. |
| Lift Chart | Depicts how well a model segments the target population and how capable it is of predicting the target. |
| Residuals | Clearly visualizes the predictive performance and validity of a regression model. |
| ROC Curve | Explores classification performance and statistics related to a selected model at any point on the probability scale. |
| Series Insights | Provides series-specific information. |
| Stability | Provides an at-a-glance summary of how well a model performs on different backtests. |
| Training Dashboard | Provides an understanding of training activity, per iteration, for Keras-based models. |
| **Understand: Explains what drives a model's predictions** | |
| Feature Effects | Visualizes the effect of changes in the value of each feature on the model's predictions. |
| Feature Impact | Provides a high-level visualization that identifies which features are most strongly driving model decisions. |
| Cluster Insights | Captures latent features in your data, surfacing and communicating actionable insights and identifying segments in your data for further modeling. |
| Prediction Explanations | Illustrates what drives predictions on a row-by-row basis using XEMP or SHAP methodology. |
| Word Cloud | Displays the most relevant words and short phrases in word cloud format. |
| **Describe: Model building information and feature details** | |
| Blueprint | Provides a graphical representation of the data preprocessing and parameter settings via blueprint. |
| Coefficients | Provides, for select models, a visual representation of the most important variables and a coefficient export capability. |
| Constraints | Forces certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target. |
| Data Quality Handling Report | Provides transformation and imputation information for blueprints. |
| Eureqa Models | Provides access to model blueprints for Eureqa generalized additive models (GAM), regression models, and classification models. |
| Log | Lists operation status results. |
| Model Info | Displays model information. |
| Rating Table | Provides access to an export of the model's complete, validated parameters. |
| **Predict: Access to prediction options** | |
| Deploy | Creates a deployment and makes predictions or generates a model package. |
| Downloads | Provides export of a model binary file, validated Java Scoring Code for a model, or charts. |
| Make Predictions | Makes in-app predictions. |
| **Compliance: Compiles model documentation for regulatory validation** | |
| Compliance Documentation | Generates individualized model documentation. |
| Template Builder | Allows you to create, edit, and share custom documentation templates. |
| **Comments: Adds comments to a modeling project** | |
| Comments | Adds comments to items in the AI Catalog. |
| **Bias and Fairness: Tests models for bias** | |
| Per-Class Bias | Identifies whether a model is biased and, if so, how much and whom it is biased toward or against. |
| Cross-Class Data Disparity | Depicts why a model is biased and where in the training data it learned that bias. |
| Cross-Class Accuracy | Measures the model's accuracy for each class segment of the protected feature. |
| **Insights and more: Graphical representations of model details** | |
| Activation Maps | Visualizes areas of images that a model is using when making predictions. |
| Anomaly Detection | Lists the most anomalous rows (those with the highest scores) from the Training data. |
| Category Cloud | Visualizes relevancy of a collection of categories from summarized categorical features. |
| Hotspots | Indicates predictive performance. |
| Image Embeddings | Displays a projection of images onto a two-dimensional space defined by similarity. |
| Text Mining | Visualizes relevancy of words and short phrases. |
| Tree-based Variable Importance | Ranks the most important variables in a model. |
| Variable Effects | Illustrates the magnitude and direction of a feature's effect on a model's predictions. |
| Word Cloud | Visualizes variable keyword relevancy. |
| Learning Curves | Helps to determine whether it is worthwhile to increase dataset size. |
| Speed vs Accuracy | Illustrates the tradeoff between runtime and predictive accuracy. |
| Model Comparison | Compares selected models by varying criteria. |
| Bias vs Accuracy | Illustrates the tradeoff between predictive accuracy and fairness. |
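
Several of the insights in the table above, including Feature Impact, the ROC Curve, and the Lift Chart, can also be retrieved through the DataRobot Python client. The sketch below is an illustration only, assuming a binary classification project with a validation partition; the endpoint, token, project ID, and model ID are placeholders.

```python
import datarobot as dr

# Placeholders for the endpoint, token, and IDs.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
model = dr.Model.get("YOUR_PROJECT_ID", "YOUR_MODEL_ID")

# Feature Impact: which features most strongly drive model decisions.
# get_or_request_feature_impact() computes the insight if it has not been run yet.
impact = model.get_or_request_feature_impact()
for row in impact[:5]:
    print(row["featureName"], round(row["impactNormalized"], 3))

# ROC Curve points for the validation partition (binary classification projects).
roc = model.get_roc_curve("validation")
print(len(roc.roc_points), "ROC curve points")

# Lift Chart bins for the validation partition.
lift = model.get_lift_chart("validation")
print(len(lift.bins), "lift chart bins")
```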

Updated June 22, 2023