Predictive experiments
The following sections describe how to build machine learning experiments in Workbench:
Topic | Description |
---|---|
Create experiments | |
Basic experiment setup | Configure the basic settings for starting a predictive experiment. |
Advanced experiment setup | Use the Advanced settings tab to fine-tune experiment setup. |
Manage models | |
Manage the Leaderboard | Navigate and filter the Leaderboard; create feature lists. |
Compare models | Compare up to three models of the same type from any number of experiments within a single Use Case. |
Add/retrain models | Retrain existing models and add models from the blueprint repository. |
Edit blueprints | Build custom blueprints using built-in tasks and custom Python/R code. |
Explore model insights | |
Blueprint | View a graphical representation of data preprocessing and parameter settings. |
Coefficients | View a visual indicator of the relative effects of the 30 most important variables. |
Compliance documentation | Generate individualized documentation to provide comprehensive guidance on what constitutes effective model risk management. |
Confusion matrix | Compare actual with predicted values in multiclass classification problems to identify class mislabeling. |
Feature Effects | View how changes to the value of each feature change model predictions. |
Feature Impact | Understand which features are driving model decisions. |
Individual Prediction Explanations | Estimate how much each feature contributes to a given prediction, with values based on difference from the average. |
Lift Chart | Depict how well a model segments the target population and how capable it is of predicting the target. |
Model Iterations | Compare trained iterations in incremental learning experiments. |
Residuals | View scatter plots and a histogram for understanding model predictive performance and validity. |
ROC Curve | Access tools for exploring classification, performance, and statistics related to a model. |
Word Cloud | Visualize how text features influence model predictions. |
Miscellaneous | |
SHAP reference | See details of SHapley Additive exPlanations, the coalitional game theory framework. |
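As background for the SHAP reference and Individual Prediction Explanations entries above: Shapley values come from coalitional game theory, where each feature's contribution is its average marginal effect over all possible feature coalitions, and the contributions sum to the difference between the prediction and a baseline. The following is an illustrative sketch only (not Workbench's implementation), computing exact Shapley values for a toy model by brute-force enumeration; the function and model names are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    f        -- model: takes a feature vector, returns a prediction
    x        -- the instance being explained
    baseline -- reference feature values (e.g., averages)
    """
    n = len(x)
    phi = [0.0] * n

    def v(coalition):
        # Features in the coalition take their value from x; the rest
        # fall back to the baseline.
        return f([x[i] if i in coalition else baseline[i] for i in range(n)])

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(s) | {i}) - v(set(s)))
    return phi

# Toy linear model: each contribution is w_i * (x_i - baseline_i).
model = lambda feats: 2.0 * feats[0] + 3.0 * feats[1] + 1.0
phi = shapley_values(model, x=[1.0, 2.0], baseline=[0.0, 0.0])
# phi == [2.0, 6.0]; the values sum to f(x) - f(baseline) = 8.0
```

Brute-force enumeration is exponential in the number of features, which is why practical SHAP implementations use model-specific or sampling-based approximations; the additivity property shown in the final comment is what the "difference from the average" phrasing refers to.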
Note
An experiment can only be part of a single Use Case. This is because a Use Case is intended to represent a specific business problem, and experiments within it are typically directed at solving that problem. If an experiment is relevant to more than one Use Case, consider consolidating the two Use Cases.
Updated June 25, 2024