Evaluate with model insights
Model insights help you interpret, explain, and validate what drives a model's predictions, and can help you decide what to do in your next experiment. The available insights depend on the experiment type, but may include those listed in the table below.
Available insights
To see a model's insights, click the model in the Leaderboard in the left pane to open the Model Overview. From there, all insights available for the experiment are grouped by purpose, answering:
- Explanations: What did the model learn?
- Performance: How good is the model?
- Details: How was the model built?
- Artifacts: What are the assets from the model?
Use search to filter insights by name and/or description. The results also indicate which group each insight belongs to.
Note that different insights are available for predictive experiments.
Insight | Description | Problem type | Sliced insights? |
---|---|---|---|
Explanations | |||
Coefficients | Provides a visual indicator of the relative effects of the 30 most important variables. | All; linear models only | |
Feature Effects | Conveys how changes to the value of each feature change model predictions. | All | ✔
Feature Impact | Shows which features are driving model decisions. | All | ✔ |
Forecasting Accuracy | Depicts how well a model predicts at each forecast distance in the experiment's forecast window. | Time series | |
Individual Prediction Explanations (XEMP) | Estimates how much each feature contributes to a given prediction, with values based on difference from the average. | Binary classification, regression | ✔ |
Performance | |||
Accuracy Over Time | Visualizes how predictions change over time. | Time-aware | |
Anomaly Assessment | Plots data for the selected backtest and provides, below the visualization, SHAP explanations for up to 500 anomalous points. | Time series | |
Anomaly Over Time | Visualizes where anomalies occur across the timeline of your data. | Time-aware | |
Forecast vs Actual | Compares predictions at multiple forecast distances for each point in time against actual values. | Time series |
Forecasting Accuracy | Provides a visual indicator of how well a model predicts at each forecast distance. | Time-aware | |
Lift Chart | Depicts how well a model segments the target population and how capable it is of predicting the target. | All | ✔ |
Metric Scores | Displays results for all supported metrics. | All | |
Period Accuracy | Shows model performance over periods within the training dataset. | Time-aware | |
ROC Curve | Provides tools for exploring classification, performance, and statistics related to a model. | Classification | ✔ |
Series Insights | Provides series-specific information for multiseries experiments. | Time series | |
Stability | Provides a summary of how well a model performs on different backtests. | Time-aware | |
Details | |||
Blueprint | Provides a graphical representation of data preprocessing and parameter settings. | All | |
Eureqa Models | Uses a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity. | All except multiclass |
Log | Lists operational status results for modeling tasks. | All | |
Model Info | Provides general model and performance information. | All | |
Artifacts | |||
Compliance documentation | Generates individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. | All |
Downloads | Provides model artifacts for download in a single ZIP file. | All |
Related Assets | Lists all apps, deployments, and registered models associated with the model; launches no-code app creation or model registration. | All |
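
Many of these insights can also be retrieved programmatically. The following is a minimal sketch, assuming the DataRobot Python client (the `datarobot` package); the endpoint, token, project ID, and model ID are placeholders, and exact method availability can vary by client version.

```python
import datarobot as dr

# Minimal sketch: connect to DataRobot and fetch a model from an existing project.
# The endpoint, token, and IDs below are placeholders, not real values.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get(project.id, "YOUR_MODEL_ID")

# Feature Impact: which features drive model decisions.
feature_impact = model.get_or_request_feature_impact()

# ROC Curve data for the validation partition (classification models only).
roc_curve = model.get_roc_curve("validation")

# Lift Chart data for the validation partition.
lift_chart = model.get_lift_chart("validation")

print(feature_impact[:5])
```

In this sketch, the Feature Impact call computes the insight if it is not already available, while the ROC Curve and Lift Chart calls return data only for models and partitions where those insights already exist.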
What's next?
After selecting a model, you can do the following from within the experiment: