Evaluate with model insights¶
Model insights help you interpret, explain, and validate what drives a model's predictions. Use these tools to assess what to do in your next experiment. The available insights depend on the experiment type as well as the experiment view (single model versus model comparison).
Available insights¶
To see a model's insights, click the model in the left-pane Leaderboard to open the Model Overview. From there, all available insights for the experiment are grouped by purpose, each group answering a question:
- Explanations: What did the model learn?
- Performance: How good is the model?
- Details: How was the model built?
- Artifacts: What are the assets from the model?
Use search to filter insights by name or description. The results also indicate which group each insight belongs to.
Note that different insights are available for time-aware experiments.
Insight | Tab | Description | Problem type | Sliced insights? | Compare available? |
---|---|---|---|---|---|
Accuracy Over Space | Performance | Reveals spatial patterns in prediction errors and visualizes prediction errors across data partitions on a map. | Geospatial | | |
Activation Maps | Explanations | Highlights regions of an image according to their importance to a model's prediction. | Visual AI, time-aware predictive | | |
Anomaly Over Space | Performance | Maps anomaly scores based on a dataset's location features. | Geospatial | | |
Blueprint | Details | Provides a graphical representation of data preprocessing and parameter settings. | All | | |
Cluster Insights | Explanations | Visualizes the groupings of data that result from modeling with the learning type set to clustering. | Predictive clustering | | |
Coefficients | Explanations | Provides a visual indicator of the relative effects of the 30 most important variables. | All; linear models only | | |
Compliance documentation | Artifacts | Generates individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | All | | |
Confusion matrix | Performance | Compares actual with predicted values in multiclass classification problems to identify class mislabeling. | Classification, time-aware | | |
Downloads | Artifacts | Downloads model artifacts in a single ZIP file. | All | | |
Eureqa Models | Details | Uses a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity. | All except multiclass | | |
Feature Effects | Explanations | Conveys how changes to the value of each feature change model predictions. | All | ✔ | |
Feature Impact | Explanations | Shows which features are driving model decisions. | All | ✔ | ✔ |
Image Embeddings | Explanations | Shows projections of images in two dimensions to reveal visual similarity between a subset of images and help identify outliers. | Visual AI, time-aware predictive | | |
Individual Prediction Explanations | Explanations | Estimates how much each feature contributes to a given prediction, with values based on the difference from the average. | Binary classification, regression | ✔ | |
Lift Chart | Performance | Depicts how well a model segments the target population and how capable it is of predicting the target. | All | ✔ | ✔ |
Log | Details | Lists operational status results for modeling tasks. | All | | |
Metric Scores | Performance | Displays results for all supported metrics. | All | | |
Model Info | Details | Provides general model and performance information. | All | | |
Model Iterations | Details | Compares trained iterations in incremental learning experiments. | Binary classification, regression | | |
Multilabel: Per-Label Metrics | Performance | Summarizes per-label performance across different values of the prediction threshold. | Multilabel classification | | |
Neural Network Visualizer | Details | Provides a visual breakdown of each layer in the model's neural network. | Visual AI, time-aware predictive | | |
Related Assets | Artifacts | Lists all apps, deployments, and registered models associated with the model; launches no-code app creation or model registration. | All | | |
Residuals | Performance | Provides scatter plots and a histogram for understanding model predictive performance and validity. | Regression | ✔ | |
ROC Curve | Performance | Provides tools for exploring classification performance and statistics related to a model. | Binary classification | ✔ | ✔ |
SHAP Distributions: Per Feature | Explanations | Displays, via a violin plot, the distribution of SHAP values and feature values to aid in analyzing how feature values influence predictions. | Binary classification, regression | ✔ | |
Word Cloud | Explanations | Visualizes how text features influence model predictions. | Binary classification, regression | | |
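Many of these insights can also be retrieved programmatically. Below is a minimal sketch using the DataRobot Python client (the `datarobot` package); the endpoint, API token, and project/model IDs are placeholders, and exact method availability depends on your client version and the model's problem type.

```python
import datarobot as dr

# Placeholder credentials and IDs -- replace with your own.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get("YOUR_PROJECT_ID", "YOUR_MODEL_ID")

# Feature Impact: shows which features drive model decisions.
# This computes the insight if it hasn't been requested yet,
# then returns a list of per-feature impact records.
impact = model.get_or_request_feature_impact()
for record in impact[:10]:
    print(record["featureName"], record["impactNormalized"])

# ROC Curve data for the validation partition
# (binary classification models only).
roc = model.get_roc_curve("validation")
print(len(roc.roc_points), "points on the validation ROC curve")
```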
What's next?¶
After selecting a model, you can do the following from within the experiment:
- Compare models.
- Add models to experiments.
- Make predictions (see the prediction sketch after this list).
- Create No-Code AI Apps.
- Generate a compliance report.
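For making predictions, a minimal sketch with the same Python client follows; the scoring file path and IDs are again placeholders.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get("YOUR_PROJECT_ID", "YOUR_MODEL_ID")

# Upload a scoring dataset, request predictions from the selected
# model, and block until the prediction job completes.
dataset = project.upload_dataset("scoring_data.csv")  # placeholder path
predict_job = model.request_predictions(dataset.id)
predictions = predict_job.get_result_when_complete()  # pandas DataFrame
print(predictions.head())
```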