Evaluate with model insights
Model insights help to interpret, explain, and validate what drives a model's predictions, and they can help you decide what to try in your next experiment. The available insights depend on the experiment type and on the experiment view (single model versus comparison). Click a model in the Model Leaderboard to access its insights.
Available insights
To see a model's insights, click the model in the left-pane Leaderboard to open the Model Overview. From there, all experiment insights are grouped by purpose, with each group answering a question:
- Explanations: What did the model learn?
- Performance: How good is the model?
- Details: How was the model built?
- Artifacts: What are the assets from the model?
Use search to filter insights by name or description. The results also indicate which group each insight belongs to.
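
If you prefer to pull the same information programmatically rather than through the UI, many of these insights are also exposed by the DataRobot Python client. The snippet below is a minimal sketch, assuming a classic project-based workflow, a valid API token, and placeholder `PROJECT_ID`/`MODEL_ID` values; method availability varies by client version and experiment type.

```python
# Minimal sketch: retrieving a few of the insights listed below with the
# DataRobot Python client. PROJECT_ID, MODEL_ID, and the API token are
# placeholders; availability varies by client version and project type.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("PROJECT_ID")
model = dr.Model.get(project.id, "MODEL_ID")

# Feature Impact: which features drive model decisions.
feature_impact = model.get_or_request_feature_impact()
for row in feature_impact[:5]:
    print(row["featureName"], round(row["impactNormalized"], 3))

# Metric Scores: results for all computed metrics.
print(model.metrics)

# ROC Curve and Lift Chart data for the validation partition
# (binary classification projects only).
roc = model.get_roc_curve("validation")
lift = model.get_lift_chart("validation")
print(len(roc.roc_points), "ROC points;", len(lift.bins), "lift chart bins")
```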

Different insights are available for predictive and time-aware experiments, as indicated in the tables below.
Insight | Tab | Description | Problem type | Sliced insights? | Compare available? |
---|---|---|---|---|---|
Accuracy Over Space | Performance | Reveals spatial patterns in prediction errors by visualizing errors across data partitions on a map. | Geospatial | | |
Accuracy Over Time | Performance | Visualizes how predictions change over time. | Time-aware predictive | | |
Attention Maps | Explanations | Highlights regions of an image according to their importance to the model's prediction. | Visual AI, time-aware predictive | | |
Anomaly Assessment | Performance | Plots data for the selected backtest and provides, below the visualization, SHAP explanations for up to 500 anomalous points. | Time series | | |
Anomaly Over Space | Performance | Maps anomaly scores based on a dataset's location features. | Geospatial | | |
Anomaly Over Time | Performance | Visualizes where anomalies occur across the timeline of your data. | Time-aware predictive | | |
Blueprint | Details | Provides a graphical representation of data preprocessing and parameter settings. | All | | |
Cluster Insights | Explanations | Visualizes the groupings of data that result from modeling with learning type set to clustering. | Predictive clustering | | |
Coefficients | Explanations | Provides a visual indicator of the relative effects of the 30 most important variables. | All; linear models only | | |
Compliance documentation | Artifacts | Generates individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | All | | |
Confusion matrix | Performance | Compares actual with predicted values in multiclass classification problems to identify class mislabeling. | Classification, time-aware | | |
Downloads | Artifacts | Downloads model artifacts in a single ZIP file. | All | | |
Eureqa Models | Details | Uses a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity. | All, no multiclass | | |
Feature Effects | Explanations | Conveys how changes to the value of each feature change model predictions. | All | ✔ | |
Feature Impact | Explanations | Shows which features are driving model decisions. | All | ✔ | ✔ |
Forecasting Accuracy | Performance | Depicts how well a model predicts at each forecast distance in the experiment's forecast window. | Time series, Time-aware predictive | | |
Forecast vs Actual | Performance | Compares the multiple values predicted for each point in time (forecast distances) with actual values. | Time series | | |
Image Embeddings | Explanations | Shows projections of images in two dimensions to see visual similarity between a subset of images and help identify outliers. | Visual AI, time-aware predictive | | |
Individual Prediction Explanations | Explanations | Estimates how much each feature contributes to a given prediction, with values based on difference from the average. | Binary classification, regression | ✔ | |
Lift Chart | Performance | Depicts how well a model segments the target population and how capable it is of predicting the target. | All | ✔ | ✔ |
Log | Details | Lists operational status results for modeling tasks. | All | | |
Metric Scores | Performance | Displays results for all supported metrics. | All | | |
Model Info | Details | Provides general model and performance information. | All | | |
Model Iterations | Details | Compares trained iterations in incremental learning experiments. | Binary classification, regression | | |
Multilabel: Per-Label Metrics | Performance | Summarizes performance across different label values of the prediction threshold. | Multilabel classification | | |
Neural Network Visualizer | Details | Provides a visual breakdown of each layer in the model's neural network. | Visual AI, time-aware predictive | | |
Related Assets | Artifacts | Lists all apps, deployments, and registered models associated with the model; launches no-code app creation or model registration. | All | | |
Period Accuracy | Performance | Shows model performance over periods within the training dataset. | Time-aware predictive | | |
Residuals | Performance | Provides scatter plots and a histogram for understanding model predictive performance and validity. | Regression | ✔ | |
ROC Curve | Performance | Provides tools for exploring classification performance and statistics related to a model. | Binary classification | ✔ | ✔ |
Series Insights | Performance | Provides series-specific information for multiseries experiments. | Time series | | |
SHAP Distributions: Per Feature | Explanations | Displays, via a violin plot, the distribution of SHAP values and feature values to aid in the analysis of how feature values influence predictions. | Binary classification, regression | ✔ | |
Stability | Performance | Provides a summary of how well a model performs on different backtests. | Time-aware predictive | | |
Word Cloud | Explanations | Visualizes how text features influence model predictions. | Binary classification, regression | | |
The same insights, grouped by the Model Overview tab where each appears:

Insight | Description | Problem type | Sliced insights? | Compare available? |
---|---|---|---|---|
Explanations | ||||
Attention Maps | Highlights regions of an image according to their importance to the model's prediction. | Visual AI, time-aware predictive | |
Cluster Insights | Visualizes the groupings of data that result from modeling with learning type set to clustering. | Predictive clustering | ||
Coefficients | Provides a visual indicator of the relative effects of the 30 most important variables. | All; linear models only | ||
Feature Effects | Conveys how changes to the value of each feature change model predictions. | All | ✔ | |
Feature Impact | Shows which features are driving model decisions. | All | ✔ | ✔ |
Image Embeddings | Shows projections of images in two dimensions to see visual similarity between a subset of images and help identify outliers. | Visual AI, time-aware predictive | ||
Individual Prediction Explanations | Estimates how much each feature contributes to a given prediction, with values based on difference from the average. | Binary classification, regression | ✔ | |
Individual Prediction Explanations (XEMP) | Estimates how much each feature contributes to a given prediction, with values based on difference from the average. | Binary classification, regression | ✔ | |
SHAP Distributions: Per Feature | Displays, via a violin plot, the distribution of SHAP values and feature values to aid in the analysis of how feature values influence predictions. | Binary classification, regression | ✔ | |
Word Cloud | Visualizes how text features influence model predictions. | Binary classification, regression | |
Performance | ||||
Accuracy Over Space | Reveals spatial patterns in prediction errors by visualizing errors across data partitions on a map. | Geospatial | |
Accuracy Over Time | Visualizes how predictions change over time. | Time-aware predictive | ||
Anomaly Assessment | Plots data for the selected backtest and provides, below the visualization, SHAP explanations for up to 500 anomalous points. | Time series | ||
Anomaly Over Space | Maps anomaly scores based on a dataset's location features. | Geospatial | ||
Anomaly Over Time | Visualizes where anomalies occur across the timeline of your data. | Time-aware predictive | ||
Confusion matrix | Compares actual with predicted values in multiclass classification problems to identify class mislabeling. | Classification, time-aware | ||
Forecast vs Actual | Compares the multiple values predicted for each point in time (forecast distances) with actual values. | Time series | |
Forecasting Accuracy | Provides a visual indicator of how well a model predicts at each forecast distance. | Time-aware predictive | ||
Lift Chart | Depicts how well a model segments the target population and how capable it is of predicting the target. | All | ✔ | ✔ |
Metric Scores | Displays results for all supported metrics. | All | ||
Multilabel: Per-Label Metrics | Summarizes performance across different label values of the prediction threshold. | Multilabel classification | ||
Period Accuracy | Shows model performance over periods within the training dataset. | Time-aware predictive | ||
Residuals | Provides scatter plots and a histogram for understanding model predictive performance and validity. | Regression | ✔ | |
ROC Curve | Provides tools for exploring classification performance and statistics related to a model. | Binary classification | ✔ | ✔ |
Series Insights | Provides series-specific information for multiseries experiments. | Time series | ||
Stability | Provides a summary of how well a model performs on different backtests. | Time-aware predictive | ||
Details | ||||
Blueprint | Provides a graphical representation of data preprocessing and parameter settings. | All | ||
Eureqa Models | Uses a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity. | All, no multiclass | ||
Log | Lists operational status results for modeling tasks. | All | ||
Model Info | Provides general model and performance information. | All | ||
Model Iterations | Compares trained iterations in incremental learning experiments. | Binary classification, regression | ||
Neural Network Visualizer | Provides a visual breakdown of each layer in the model's neural network. | Visual AI, time-aware predictive | ||
Artifacts | ||||
Compliance documentation | Generates individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | All | ||
Downloads | Downloads model artifacts in a single ZIP file. | All | |
Related Assets | Lists all apps, deployments, and registered models associated with the model; launches no-code app creation or model registration. | All | |
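
For the Artifacts-tab items, compliance documentation can also be generated and downloaded outside the UI. The sketch below assumes the classic `ComplianceDocumentation` workflow in the DataRobot Python client and placeholder IDs; some client versions expose equivalent functionality through newer automated-documentation APIs.

```python
# Minimal sketch: generating and downloading compliance documentation with the
# DataRobot Python client. PROJECT_ID, MODEL_ID, and the API token are
# placeholders; the exact class and defaults may differ by client version.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

doc = dr.ComplianceDocumentation(project_id="PROJECT_ID", model_id="MODEL_ID")
job = doc.generate()                    # kick off document generation on the server
job.wait_for_completion()               # block until the document is ready
doc.download("model_compliance.docx")   # save the generated DOCX locally
```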