Evaluate with model insights
Model insights help you interpret, explain, and validate what drives a model's predictions, and can inform what to do in your next experiment. The available insights depend on the experiment type as well as the experiment view.
To see a model's insights, click the model in the left-pane Leaderboard. Note that different insights are available for time-aware experiments.
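If you prefer to script this step, the Leaderboard can also be listed programmatically. The following is a minimal sketch, assuming the DataRobot Python client (the `datarobot` package) and that the experiment is accessible as a project; the endpoint, API token, and project ID are placeholders to replace with your own values.

```python
# Minimal sketch (assumptions: DataRobot Python client installed, experiment
# accessible as a project; endpoint, token, and project ID are placeholders).
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
project = dr.Project.get("YOUR_PROJECT_ID")

# List Leaderboard models with their validation score on the project metric
for model in project.get_models():
    score = model.metrics.get(project.metric, {}).get("validation")
    print(model.id, model.model_type, score)
```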
Available insights
Insight | Description | Problem type | Sliced insights? | Compare available? |
---|---|---|---|---|
Accuracy Over Space | Reveals spatial patterns in prediction errors by visualizing them across data partitions on a map. | | | |
Anomaly Over Space | Maps anomaly scores based on a dataset's location features. | | | |
Blueprint | Provides a graphical representation of data preprocessing and parameter settings. | All | | |
Cluster Insights | Visualizes the groupings of data that result from modeling with the learning type set to clustering. | Predictive clustering | | |
Coefficients | Provides a visual indicator of the relative effects of the 30 most important variables. | All; linear models only | | |
Compliance documentation | Generates individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | All | | |
Confusion matrix | Compares actual with predicted values in multiclass classification problems to identify class mislabeling. | Classification, time-aware | | |
Feature Effects | Conveys how changes to the value of each feature change model predictions. | All | ✔ | |
Feature Impact | Shows which features are driving model decisions. | All | ✔ | ✔ |
Individual Prediction Explanations | Estimates how much each feature contributes to a given prediction, with values based on the difference from the average. | Binary classification, regression | ✔ | |
Lift Chart | Depicts how well a model segments the target population and how well it predicts the target. | All | ✔ | ✔ |
Model Iterations | Compares trained iterations in incremental learning experiments. | Binary classification, regression | | |
Residuals | Provides scatter plots and a histogram for understanding model predictive performance and validity. | Regression | ✔ | |
ROC Curve | Provides tools for exploring classification performance and statistics related to a model. | Binary classification | ✔ | ✔ |
SHAP Distributions: Per Feature | Displays, via a violin plot, the distribution of SHAP values and feature values to aid in analyzing how feature values influence predictions. | Binary classification, regression | ✔ | |
Word Cloud | Visualizes how text features influence model predictions. | Binary classification, regression | | |
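Several of the insights above can also be retrieved through the API. The sketch below is illustrative only, assuming the DataRobot Python client and a binary classification model; the project and model IDs are placeholders, and availability of each insight still depends on the problem type listed in the table.

```python
# Illustrative sketch (assumptions: DataRobot Python client, binary
# classification model; project/model IDs are placeholders).
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
model = dr.Model.get("YOUR_PROJECT_ID", "YOUR_MODEL_ID")

# Feature Impact: which features are driving model decisions
for row in model.get_or_request_feature_impact()[:5]:
    print(row["featureName"], round(row["impactNormalized"], 3))

# ROC Curve (binary classification): threshold-level performance statistics
roc = model.get_roc_curve("validation")
print(len(roc.roc_points), "threshold points")

# Lift Chart: how well the model segments the target population
lift = model.get_lift_chart("validation")
print(len(lift.bins), "bins")
```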
What's next?
After selecting a model, you can do the following from within the experiment:
- Compare models.
- Add models to experiments.
- Make predictions.
- Create No-Code AI Apps.
- Generate a compliance report.