
Predictive experiments

Experiments are the individual "projects" within a Use Case. They allow you to vary data, targets, and modeling settings to find the optimal models to solve your business problem. Within each experiment, you have access to its Leaderboard and model insights, as well as information that summarizes the data and experiment setup.

There are two types of AI experiments available in Workbench:

  • Predictive modeling, described on these pages, makes row-by-row predictions based on your data.
  • Time-aware modeling, described here, models using time-relevant data to make row-by-row predictions, time series forecasts, or current value predictions ("nowcasts").

For each, you can build models using either supervised or unsupervised learning.

  • Supervised learning uses the other features of your dataset to predict the target you specify.
  • Unsupervised learning uses unlabeled data to surface insights about patterns in your data, answering questions like "Are there anomalies in my data?" and "Are there natural clusters?" (Both setups are sketched in the example after this list.)
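
For orientation only, the following is a minimal sketch of how the same two setups could be expressed with the public DataRobot Python client, assuming a datarobot 3.x client and that a Workbench experiment is backed by a project. The endpoint, token, file name, and target column are placeholders, and no code is required to do this in Workbench itself.

    # Hedged sketch, not Workbench itself: the same two setups expressed with the
    # public `datarobot` Python client (assumes datarobot>=3.0; endpoint, token,
    # file name, and target column below are placeholders).
    import datarobot as dr

    dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

    # Supervised: name a target; the remaining features become predictors.
    supervised = dr.Project.create(
        sourcedata="transactions.csv", project_name="Fraud risk (supervised)"
    )
    supervised.analyze_and_model(target="is_fraud", mode=dr.AUTOPILOT_MODE.QUICK)

    # Unsupervised: no target; modeling surfaces patterns such as anomalies.
    unsupervised = dr.Project.create(
        sourcedata="transactions.csv", project_name="Fraud risk (unsupervised)"
    )
    unsupervised.analyze_and_model(unsupervised_mode=True)

    # Either way, the run produces a Leaderboard that can also be read programmatically.
    supervised.wait_for_autopilot()
    print([model.model_type for model in supervised.get_models()])

In Workbench, the equivalent choice is made during experiment setup: provide a target for supervised learning, or choose an unsupervised mode to surface anomalies or clusters.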

The following sections help you understand how to build predictive machine learning experiments in Workbench:

Create experiments
  • Supervised experiment setup: Specify a target to build models using the other features of your dataset to make predictions.
  • Unsupervised experiment setup: Use unsupervised learning to build models that surface insights about patterns in your data.
  • Advanced experiment setup: Use the Advanced settings tab to fine-tune experiment setup.

Manage models
  • Manage the Leaderboard: Navigate and filter the Leaderboard; create feature lists.
  • Compare models: Compare up to three models of the same type from any number of experiments within a single Use Case.
  • Add/retrain models: Retrain existing models and add models from the blueprint repository.
  • Edit blueprints: Build custom blueprints using built-in tasks and custom Python/R code.

Explore model insights
  • Evaluate models: View model insights to help evaluate models.

Reference
  • SHAP reference: See details of SHapley Additive exPlanations, the coalitional game theory framework.
  • Troubleshooting the Worker Queue: Describes how DataRobot uses modeling workers and how to troubleshoot problems.

Note

An experiment can only be part of a single Use Case. This is because a Use Case is intended to represent a specific business problem, and the experiments within it are typically directed at solving that problem. If an experiment is relevant to more than one Use Case, consider consolidating those Use Cases.


Updated August 28, 2024