
Create experiments

There are two AI experimentation "types" available in Workbench:

  • Predictive modeling, described on this page, makes row-by-row predictions based on your data.

  • Time-aware modeling, described here, models using time-relevant data to make row-by-row predictions, time series forecasts, or current value predictions ("nowcasts").

Experiments are the individual "projects" within a Use Case. They allow you to vary data, targets, and modeling settings to find the optimal models to solve your business problem. Within each experiment, you have access to its Leaderboard and model insights, as well as experiment summary information.

See the associated FAQ for important additional information.

Create basic

Follow the steps below to create a new experiment from within a Use Case.

Note

You can also start modeling directly from a dataset by clicking the Start modeling button. The Set up new experiment page opens. From there, the instructions follow the flow described below.

Create a feature list

Public preview

Support for feature lists in Workbench is on by default.

Feature flag: Enable Feature Lists in Workbench Preview

Before modeling, you can create a custom feature list from the Datasets tab. If you select that list during modeling setup, DataRobot creates the modeling data using only the features in that list.

To create a new list:

  1. From the Use Case, select the dataset you plan to model with to open the data preview.
  2. Click the dropdown at the top of the page and select + New feature list to open the Features view.

  3. Select the checkbox next to each feature you want to include in your custom list. Then, click Create feature list, enter a name and description (optional), and click Save changes.
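For script-driven workflows, a comparable custom list can be created with the DataRobot Python client. This is a minimal sketch against the classic project API; the project ID and feature names are placeholders, and Workbench dataset-level lists may use different endpoints:

```python
# Sketch: create a custom feature list programmatically; the project ID and
# feature names are placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("<project-id>")
flist = project.create_featurelist(
    name="my-custom-list",
    features=["age", "income", "region"],  # features to include in the list
)
print(flist.id, flist.name)
```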

Add experiment

From within a Use Case, click Add new and select Add experiment. The Set up new experiment page opens, which lists all data previously loaded to the Use Case.

Add data

Add data to the experiment, either by adding new data (1) or selecting a dataset that has already been loaded to the Use Case (2).

Once the data is loaded to the Use Case (option 2 above), click to select the dataset you want to use in the experiment. Workbench opens a preview of the data.

From here, you can:

| Option | Description |
|--------|-------------|
| 1 | Click to return to the data listing and choose a different dataset. |
| 2 | Click the icon to proceed and set the target. |
| 3 | Click Next to proceed and set the target. |

Select target

Once you have proceeded to target selection, Workbench prepares the dataset for modeling (EDA 1). When the process finishes, to set the target, either:

  • Scroll through the list of features to find your target. If it is not showing, expand the list from the bottom of the display. Once located, click the entry in the table to use the feature as the target.

  • Type the name of the target feature you would like to predict in the entry box. DataRobot lists matching features as you type.

After the target is entered, Workbench displays a histogram providing information about the target feature's distribution and, in the right pane, a summary of the experiment settings.

From here, you are ready to build models with the default settings. Or, you can modify the default settings and then begin. If using the default settings, click Start modeling to begin the Quick mode Autopilot modeling process.
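For reference, the default flow can be approximated with the DataRobot Python client. This is a sketch using the classic Project API; file and column names are placeholders, and method names can vary between client versions:

```python
# Sketch: upload data, set the target, and run Quick Autopilot.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Create the experiment (project) from a local dataset
project = dr.Project.create("churn.csv", project_name="Churn experiment")

# Set the target and start the default Quick Autopilot process
project.set_target(target="churned", mode=dr.AUTOPILOT_MODE.QUICK)
project.wait_for_autopilot()  # block until models finish building
```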

Customize basic settings

Changing experiment parameters is a good way to iterate on a Use Case. Before starting to model, you can change a variety of settings:

| Setting | Description |
|---------|-------------|
| Positive class | For binary classification experiments only. The class to use when a prediction scores higher than the classification threshold. |
| Modeling mode | The modeling mode, which influences the blueprints DataRobot chooses to train. |
| Optimization metric | The metric used to score models, if different from DataRobot's recommendation. |
| Training feature list | The subset of features that DataRobot uses to build models. |

After changing any or all of the settings described, click Start modeling to begin the Quick mode modeling process or customize more advanced settings.
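In the Python client, these basic settings map to arguments of the call that starts modeling; a hedged sketch, where `flist` is the custom feature list from the earlier sketch and all values are placeholders:

```python
# Sketch: override the basic settings in one call; values are placeholders.
import datarobot as dr

project.set_target(
    target="churned",
    positive_class=1,                  # binary classification only
    mode=dr.AUTOPILOT_MODE.QUICK,      # modeling mode
    metric="LogLoss",                  # override the recommended metric
    featurelist_id=flist.id,           # train on a custom feature list
)
```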

Change modeling mode

By default, DataRobot builds experiments using Quick Autopilot. However, you can change the modeling mode to train specific blueprints or all applicable repository blueprints.

The following table describes each of the modeling modes:

| Modeling mode | Description |
|---------------|-------------|
| Quick (default) | Using a sample size of 64%, Quick Autopilot runs a subset of models, based on the specified target feature and performance metric, to provide a base set of models that build and provide insights quickly. |
| Manual | Manual mode gives you full control over which blueprints to execute. After EDA2 completes, DataRobot redirects you to the blueprint repository, where you can select one or more blueprints for training. |
| Comprehensive | Comprehensive Autopilot mode runs all repository blueprints on the maximum Autopilot sample size to ensure greater model accuracy. This mode results in extended build times. |
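As an illustration of Manual mode, the Python client exposes the blueprint repository directly. A sketch, where the substring filter is purely illustrative:

```python
# Sketch: Manual mode leaves blueprint selection to you.
import datarobot as dr

project.set_target(target="churned", mode=dr.AUTOPILOT_MODE.MANUAL)

for blueprint in project.get_blueprints():
    if "Gradient Boosted" in blueprint.model_type:  # illustrative filter
        project.train(blueprint)  # queue this blueprint for training
```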

Change optimization metric

The optimization metric defines how DataRobot scores your models. After you choose a target feature, DataRobot selects an optimization metric based on the modeling task. Typically, the recommended metric is the best selection for your experiment. To override it and build models using a different metric, use the Optimization metric dropdown:

See the reference material for a complete list and descriptions of available metrics.
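When scripting, the client can also list which metrics are valid for a candidate target before you choose. A sketch, assuming the classic Project API; the exact response keys may vary by client version:

```python
# Sketch: list the metrics DataRobot considers valid for a candidate target.
metrics = project.get_metrics("churned")
print(metrics["available_metrics"])  # e.g., ["LogLoss", "AUC", ...]
```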

Change feature list (pre-modeling)

Feature lists control the subset of features that DataRobot uses to build models. Workbench defaults to the Informative Features list, but you can modify that before modeling. To change the feature list, click the Feature list dropdown and select a different list:

You can also change the selected list on a per-model basis once the experiment finishes building.
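Programmatically, retraining a Leaderboard model on a different list looks roughly like this (a sketch; `flist` is the custom list from the earlier sketch, and the first Leaderboard model is picked arbitrarily):

```python
# Sketch: retrain an existing Leaderboard model on a different feature list.
model = project.get_models()[0]  # an arbitrary Leaderboard model
job_id = model.train(featurelist_id=flist.id)
```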

Customize advanced settings

To apply more advanced modeling criteria before training, you can optionally:

Modify partitioning

Partitioning describes the method DataRobot uses to “clump” observations (or rows) together for evaluation and model building. Workbench defaults to five-fold cross-validation with stratified sampling and a 20% holdout fold.

Note

If there is a date feature available, your experiment is eligible for Date/time partitioning, which assigns rows to backtests chronologically instead of, for example, randomly. This is the only valid partitioning method for time-aware projects. See the time-aware modeling documentation for more information.

To change the partitioning method or validation type, click the icon for Additional settings, Next, or the Partitioning field in the summary:

Set the partitioning method

The partitioning method tells DataRobot how to assign rows when training models. Note that the available partitioning methods and validation types depend on the target feature and/or partition column; not all selections will always display as available. The following table briefly describes each method; see also this section for more partitioning details.

| Method | Description |
|--------|-------------|
| Stratified | Rows are randomly assigned to training, validation, and holdout sets, preserving, as closely as possible, the same ratio of values for the prediction target as in the original data. This is the default method for binary classification problems. |
| Random | DataRobot randomly assigns rows to the training, validation, and holdout sets. This is the default method for regression problems. |
| User-defined grouping | Creates a 1:1 mapping between values of the selected feature and validation partitions. Each unique value receives its own partition, and all rows with that value are placed in that partition. This method is recommended for partition features with low cardinality. See partition by grouping, below. |
| Automated grouping | All rows with the same single value for the selected feature are guaranteed to be in the same training or test set. Each partition can contain more than one value for the feature, but each individual value is automatically grouped together. This method is recommended for partition features with high cardinality. See partition by grouping, below. |
| Date/time | See time-aware experiments. |

Set the validation type

Validation type sets the method used on data to validate models. Choose a method and set the associated fields. A graphic below the configuration fields illustrates the settings. See the description of validation type when using user-defined or automated group partitioning.

| Field | Description |
|-------|-------------|
| Cross-validation: | Separates the data into two or more "folds" and creates one model per fold, with the data assigned to that fold used for validation and the rest used for training. |
| Cross-validation folds | Sets the number of folds used with the cross-validation method. A higher number increases the training data available in each fold, which also increases the total training time. |
| Holdout percentage | Sets the percentage of data that Workbench "hides" when training. The Leaderboard shows a holdout value, calculated from the trained model's predictions on the holdout partition. |
| Training-validation-holdout: | For larger datasets, partitions data into three distinct sections (training, validation, and holdout), with predictions based on a single pass over the data. |
| Validation percentage | Sets the percentage of data that Workbench uses to validate the trained model. |
| Holdout percentage | Sets the percentage of data that Workbench "hides" when training. The Leaderboard shows a holdout value, calculated from the trained model's predictions on the holdout partition. |

Note

If the dataset exceeds 800 MB, training-validation-holdout is the only available validation type for all partitioning methods.
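In the Python client, these choices are expressed as a partitioning spec passed at target selection. A sketch mirroring the Workbench defaults described above, assuming the classic client's class and argument names:

```python
# Sketch: partitioning specs passed at target selection.
import datarobot as dr

# Five-fold cross-validation with stratified sampling and a 20% holdout
stratified_cv = dr.StratifiedCV(holdout_pct=20, reps=5)

# Training-validation-holdout, required for datasets over 800 MB
random_tvh = dr.RandomTVH(holdout_pct=20, validation_pct=16)

project.set_target(target="churned", partitioning_method=stratified_cv)
```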

Partition by grouping

While less common, user-defined and automated group partitioning provide a method for partitioning by partition feature—a feature from the dataset that is the basis of grouping.

  • With user-defined grouping, rows are assigned to partitions using the values of the selected partition feature, one partition for each unique value. When this method is selected, DataRobot recommends specifying a feature that has fewer than 10 unique values.

  • With automated grouping, all rows with the same single (specified) value of the partition feature are assigned to the same partition. Each partition can contain multiple values of that feature. When this method is selected, DataRobot recommends specifying a feature that has six or more unique values.

Once either of these methods is selected, you are prompted to enter the partition feature. Help text provides information on the number of values the partition feature must contain; click the dropdown to view features along with their unique value counts.

After choosing a partition feature, set the validation type. The applicable validation types depend on the number of unique values of the partition feature, as illustrated in the following chart.

Automated grouping uses the same validation settings as described above. User-defined grouping, however, prompts for values specific to the partition feature. For cross-validation, setting holdout is optional. If you do set it, you select a value of the partition feature instead of a percentage. For training-validation-holdout, select a value of the partition feature for each section, again instead of a percentage.
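Both grouping methods have client-side counterparts as well. A sketch, where the column names are placeholders:

```python
# Sketch: group-based partitioning specs; column names are placeholders.
import datarobot as dr

# Automated grouping: rows sharing a "store_id" value stay in one partition
group_cv = dr.GroupCV(holdout_pct=20, reps=5, partition_key_cols=["store_id"])

# User-defined grouping: each unique "fold" value becomes a partition,
# with the value "holdout" reserved for the holdout set
user_cv = dr.UserCV(user_partition_col="fold", cv_holdout_level="holdout")

project.set_target(target="churned", partitioning_method=group_cv)
```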

Configure additional settings

Choose the Additional settings tab to set more advanced modeling capabilities. Note that the Time series modeling tab will be available or greyed out depending on whether DataRobot found any date/time features in the dataset.

Configure the following, as required by your business use case.

Monotonic feature constraints

Monotonic constraints control the direction of influence between features and the target. In some use cases (typically insurance and banking), you may want to force the directional relationship between a feature and the target (for example, higher home values should always lead to higher home insurance rates). By training with monotonic constraints, you force certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target.

Using the monotonic constraints feature requires creating special feature lists, which are then selected here. Note also that when using Manual mode, available blueprints are marked with a MONO badge to identify supporting models.
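A sketch of the equivalent client-side setup, assuming the constrained features are already known; feature and target names are placeholders:

```python
# Sketch: monotonic constraints via special feature lists; placeholders.
import datarobot as dr

up = project.create_featurelist("mono-up", features=["home_value"])
down = project.create_featurelist("mono-down", features=["deductible"])

opts = dr.AdvancedOptions(
    monotonic_increasing_featurelist_id=up.id,
    monotonic_decreasing_featurelist_id=down.id,
)
project.set_target(target="premium", advanced_options=opts)
```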

Weight

Weight sets a single feature to use as a differential weight, indicating the relative importance of each row. It is used when building or scoring a model—for computing metrics on the Leaderboard—but not for making predictions on new data. All values for the selected feature must be greater than 0. DataRobot runs validation and ensures the selected feature contains only supported values.
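In the client, weight is one of the advanced options passed at target selection. A minimal sketch, with a placeholder column name:

```python
# Sketch: designate a weight column (all values must be greater than 0).
import datarobot as dr

opts = dr.AdvancedOptions(weights="row_weight")  # placeholder column name
project.set_target(target="churned", advanced_options=opts)
```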

Insurance-specific settings

Several features are available that address frequent weighting needs of the insurance industry. The table below describes each briefly, but more detailed information can be found here.

| Setting | Description |
|---------|-------------|
| Exposure | In regression problems, sets a feature to be treated with strict proportionality in target predictions, adding a measure of exposure when modeling insurance rates. DataRobot handles a feature selected for Exposure as a special column, adding it to raw predictions when building or scoring a model; the selected column(s) must be present in any dataset later uploaded for predictions. |
| Count of Events | Improves modeling of a zero-inflated target by adding information on the frequency of non-zero events. |
| Offset | Adjusts the model intercept (linear models) or margin (tree-based models) for each sample; it accepts multiple features. |
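These settings also correspond to advanced options in the client. A sketch, with placeholder column names:

```python
# Sketch: insurance-specific advanced options; column names are placeholders.
import datarobot as dr

opts = dr.AdvancedOptions(
    exposure="exposure",         # strict proportionality in predictions
    events_count="claim_count",  # frequency of non-zero events
    offset=["manual_rate"],      # offset accepts multiple features
)
project.set_target(target="pure_premium", advanced_options=opts)
```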

Change the configuration

You can make changes to the project's target or feature list before you begin modeling by returning to the Target page. To return, click the target icon, the Back button, or the Target field in the summary:

What's next?

After you start modeling, DataRobot populates the Leaderboard with models as they complete.


Updated November 20, 2023