

Autopilot in time-aware projects

Note

See the AutoML modeling mode description for non-time-aware modeling.

Modeling modes define the automated model-building strategy—the set of blueprints run and the sampling size used. DataRobot selects and runs a predefined set of blueprints, based on the specified target and date/time feature, and then trains the blueprints on an ever-increasing portion of the training backtest partition. Running more models in the early stages and advancing only the top models to the next stage allows for greater model diversity and faster Autopilot runtimes.

The default, Quick (Autopilot), is a shortened and optimized version of the full Autopilot mode. Comprehensive mode, which can be quite time-intensive, runs all Repository blueprints. Manual mode allows you to choose blueprints and sample sizes. The sample percentages used depend on the selected mode, as described in the table below.

Note

For time series projects, the modeling mode defines the set of blueprints run but not the feature reduction process. Using Quick mode has additional implications for time series (not OTV) projects.

The following table defines the modeling percentages for the selectable modes in OTV projects. Time series projects run on 100% of the data. All modes, by default, run on the project's default feature lists.

The percentages listed refer to the percentage of total rows (rows are defined by the duration or row count of the partition). The maximum number of rows is determined by project type. You can, however, train any model at any sample size from the Repository or, from the Leaderboard, retrain models to any size or change the training range and sample size using the New training period option.

Start mode | Blueprint selection | Sample size for each partition
Quick (default) | Runs a subset of blueprints, based on the specified target feature and performance metric, to provide a base set of models and insights quickly. | Models are trained directly at the maximum training size for each backtest, as defined by the project's date/time partitioning.
Autopilot | Runs a larger selection of blueprints. | Runs sample sizes beginning with 25%, then 50%, and finally 100%, each stage training the highest-accuracy models from the previous stage.
Comprehensive | Runs all Repository blueprints on the maximum sample size (100%) to ensure the highest accuracy for models. This mode results in extended build times. Not available for time series or unsupervised projects. | 100%
Manual | Runs EDA2 and then provides a link to the blueprint Repository for full control over which models to run and at what sample size. | Custom

Sample sizes differ when working with smaller datasets.

For example, when you start full Autopilot for an OTV project, DataRobot first selects blueprints optimized for your project based on the selected target and date/time feature. It then runs models using 25% of the data in Backtest 1. When those models are scored, DataRobot selects the top models and reruns them on 50% of the data. Taking the top models from that run, it then runs them on 100% of the data. Results of all model runs, at all sample sizes, are displayed on the Leaderboard. The data that makes up those samples is determined by the sampling method: either random (a random x% of rows within the same range) or latest (the latest x% of rows within the backtest for row count partitioning, or the selected time period for duration partitioning).
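
If you work programmatically, the same choice of modeling mode can be made through the DataRobot Python client. The following is a minimal sketch, assuming the datarobot package and hypothetical file, project, and column names; exact method signatures can vary by client version, so consult the client documentation for your release.

    import datarobot as dr

    # Connect to DataRobot (endpoint and token are placeholders).
    dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

    # Create a project from a local file (hypothetical dataset and column names).
    project = dr.Project.create("sales_history.csv", project_name="Sales OTV example")

    # Declare the primary date/time feature to make the project time-aware.
    partitioning = dr.DatetimePartitioningSpecification(
        datetime_partition_column="date"
    )

    # Start full Autopilot; QUICK, COMPREHENSIVE, and MANUAL are the other modes.
    project.set_target(
        target="sales",
        partitioning_method=partitioning,
        mode=dr.AUTOPILOT_MODE.FULL_AUTO,
    )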

Small datasets

Autopilot changes the sample percentages run depending on the number of rows in the dataset. The following table describes the criteria:

Number of rows | Percentages run
Less than 2000 | Final Autopilot stage only (100%)
Between 2001 and 3999 | Final two Autopilot stages (50% and 100%)
4000 and larger | All stages of Autopilot (25%, 50%, and 100%)
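
As a purely illustrative sketch (plain Python, not a DataRobot API), the table's logic amounts to the following mapping; the handling of exactly 2000 rows is an assumption, since the table does not specify it.

    def autopilot_stages(n_rows):
        # Return the sample percentages Autopilot runs for an OTV project,
        # based on dataset row count (per the table above; illustrative only).
        if n_rows < 2000:
            return [100]             # final stage only
        elif n_rows < 4000:          # boundary at exactly 2000 rows is assumed
            return [50, 100]         # final two stages
        else:
            return [25, 50, 100]     # all stages

    print(autopilot_stages(1500))    # [100]
    print(autopilot_stages(2500))    # [50, 100]
    print(autopilot_stages(6000))    # [25, 50, 100]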

Why the sampling method matters

The backtest sampling method you configure affects backtesting, model blending, and selection of the best model. Unlike in AutoML, the model trained on the largest sample size might not be the best model. With random sampling, the observable history remains the same at all sample sizes; in that case, DataRobot's behavior is similar to AutoML, and Autopilot prefers models trained on larger sample sizes.

By contrast, with the latest sampling method, the sample size determines how much historical data is included in training. In time-aware projects, reaching further back into history can have a significant effect on accuracy, either boosting it or introducing additional noise. When using latest sampling, Autopilot therefore considers models trained on any sample size during its various stages (for example, when retraining the best model on a reduced feature list or preparing a model for deployment).
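
To make the distinction concrete, here is a minimal sketch in plain pandas (not DataRobot code) of how a 50% sample of a hypothetical daily training partition differs under the two methods: random sampling keeps the full span of history, while latest sampling keeps only the most recent half.

    import pandas as pd

    # Hypothetical training partition: one row per day over two years.
    dates = pd.date_range("2021-01-01", "2022-12-31", freq="D")
    train = pd.DataFrame({"date": dates, "y": range(len(dates))})

    fraction = 0.5  # a 50% sample of the partition

    # Random: rows drawn across the entire window, so the observable
    # history (earliest to latest date) stays the same.
    random_sample = train.sample(frac=fraction, random_state=0).sort_values("date")

    # Latest: the most recent 50% of rows, so older history is dropped
    # and the effective window shortens.
    latest_sample = train.tail(int(len(train) * fraction))

    print(random_sample["date"].min(), random_sample["date"].max())  # spans the full window
    print(latest_sample["date"].min(), latest_sample["date"].max())  # only the recent half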

When backtests are defined by duration or customized settings ("project settings" mode), DataRobot samples a percentage of the time window. In row count mode, it uses the maximum number of rows used by the smallest backtest. The mode, sampling method, and training type are listed for each model on the Leaderboard.

Other aspects of multistep OTV

The following sections describe aspects specific to time-aware modeling.

Blending

With multistep OTV, models are trained on different sample sizes, so the top models on the Leaderboard may have been trained on different sample sizes. DataRobot does not blend models that use the same blueprint and feature list (but different sample sizes), even if they are the highest-scoring models.

Preparing for deployment

When preparing the best model for deployment, DataRobot retrains it on the most recent data by shifting the training period to the end of the dataset and freezing its parameters. The sampling method affects how the model is prepared for deployment in the following ways (see the sketch after the list):

  • If random sampling was used, the model prepared for deployment uses the largest possible sample. For example, if the best model was trained on P1Y @ 50% (Random), the resulting model is trained on the last P1Y in the dataset, with no sampling.
  • If latest sampling was used, the exact training parameters are preserved. (In the same example, the resulting model is trained on P1Y @ 50% (Latest).)
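
The rule can be summarized in a purely illustrative sketch (not DataRobot code; the function name and return values are hypothetical):

    def deployment_training_settings(duration, sample_pct, sampling):
        # Illustrative only: how training settings carry over when the best
        # model is retrained on the most recent data for deployment.
        if sampling == "random":
            # Random sampling: keep the duration but use the largest
            # possible sample of the most recent window (no sampling).
            return {"duration": duration, "sample_pct": 100.0, "sampling": None}
        # Latest sampling: the exact training parameters are preserved.
        return {"duration": duration, "sample_pct": sample_pct, "sampling": "latest"}

    print(deployment_training_settings("P1Y", 50.0, "random"))  # P1Y, 100%, no sampling
    print(deployment_training_settings("P1Y", 50.0, "latest"))  # P1Y, 50%, latest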

Downscaling

When running Autopilot, DataRobot initially caps the sample size by downscaling the dataset to 500 MB. If the estimated training size exceeds that amount, downscaling happens proportionally. In downscaled projects with random sampling, the model prepared for deployment is still trained on 100% of the data to maximize accuracy (even though Autopilot's maximum sample size is smaller). An additional frozen model is trained on 100% of the data within the backtest to provide insights as close as possible to the model prepared for deployment. Note that you can train any model at any sample size (exceeding 500 MB) from the Repository or retrain models to any size from the Leaderboard.

Feature reduction with time series

When using Quick mode in time series (not OTV) modeling, DataRobot applies a more aggressive feature reduction strategy, resulting in fewer derived features and therefore different types of blueprints available in the Repository.

This does not apply to unsupervised time series projects. In unsupervised projects, blueprint choice is the same between full Autopilot and Quick modes. The only difference for Quick is that the feature reduction threshold affects the number of derived features used for the SHAP-based Reduced Features list.

