Date/time partitioning

Date/time partitioning is the underlying structure that supports both time series and out-of-time validation (OTV) modeling. In fact, OTV is date/time partitioning, with additional components such as sophisticated preprocessing and insights from the Accuracy over Time graph. See below to understand how DataRobot represents dates, as well as details on the components of date/time partitioning (e.g., date formats, model deployment, gaps, backtests, etc.).

To activate time-aware modeling, your dataset must contain a column with a variable type “Date” for partitioning. If it does, the date/time partitioning feature becomes available through the Set up time-aware modeling link on the Start screen. After selecting a time feature, you can then use the Advanced options link to further configure your model build.

The following sections describe the date/time partitioning workflow.

Basic required options

To build time-aware models:

  1. Load your dataset (see the file size requirements) and select your target feature. If your dataset contains a date feature, the Set up time-aware modeling link activates. Click it to get started.

  2. From the dropdown, select the primary date/time feature. The dropdown lists all date/time features that DataRobot detected during EDA1.

  3. After selecting a feature, DataRobot computes and then loads a histogram of the time feature plotted against the target feature (feature-over-time). Note that if your dataset qualifies for multiseries modeling, this histogram represents the average of the time feature values across all series plotted against the target feature.

    You can explore what other features look like over time to view trends and determine whether there are gaps in your data (a data quality issue worth knowing about). To access these histograms, expand a numeric feature and click the Over Time option (click to compute, if necessary):

    You can interact with the Over Time chart in several ways, described below.

  4. Set the method to use for model building. If you select:

    • Out-of-Time Validation (OTV): Either start the build process immediately or set Advanced options and return to click Start.
    • Time Series Modeling: Configure the feature derivation and forecasting windows, as described in the section on time series modeling. Now, either start the build process immediately or set Advanced options and return to click Start.
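If you prefer to configure projects programmatically, the same workflow can be sketched with the DataRobot Python client. The file, column, and target names below are placeholders, and exact signatures may vary by client version:

```python
import datarobot as dr

# Placeholder file, column, and target names
project = dr.Project.create("sales_history.csv", project_name="OTV example")

# Select the primary date/time feature; use_time_series chooses the method
spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="order_date",  # must be an EDA-detected date feature
    use_time_series=False,                   # False = OTV; True = time series modeling
)

# Setting the target with this partitioning method starts the build
project.set_target(target="revenue", partitioning_method=spec)
```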

Note

Consider retraining your model on the most recent data before final deployment.

Advanced options

Expand the Show Advanced options link to set details of the partitioning method. When you enable time-aware modeling, Advanced options opens to the date/time partitioning method by default. The Backtesting section of date/time partitioning provides tools for configuring backtests for your time-aware projects.

DataRobot detects the date and/or time format (standard GLIBC strings) for the selected feature. Verify that it is correct. If the format displayed does not accurately represent the date column(s) of your dataset, modify the original dataset to match the detected format and re-upload it.

Configure the backtesting partitions. You can set them from the dropdowns (applies global settings) or by clicking the bars in the visualization (applies individual settings). Individual settings override global settings. Once you modify settings for an individual backtest, any changes to the global settings are not applied to the edited backtest.

Global backtest partitions

The following table describes global settings:

Selection Description
Number of backtests (1) Configures the number of backtests for your project, the time series equivalent of cross-validation (but based on time periods or durations instead of random rows).
Validation length (2) Configures the size of the testing data partition.
Gap length (3) Configures spaces in time, representing gaps between model training and model deployment.
Sampling method (4) Sets whether to use duration or rows as the basis for partitioning, and whether to use random or latest data.
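As a rough sketch, these global settings map onto the Python client as follows (placeholder column name; durations are ISO 8601 strings, and signatures may vary by client version):

```python
import datarobot as dr

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="order_date",  # placeholder
    number_of_backtests=3,      # (1) up to 20 backtests
    validation_duration="P1M",  # (2) one-month validation partitions
    gap_duration="P7D",         # (3) seven-day gap between training and validation
)
# (4) The duration/rows and random/latest sampling choices are made in the UI
# (the Equal rows per backtest toggle and the sampling method selector).
```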

See the table below for a description of the backtesting section's display elements.

Note

When changing partition year/month/day settings, note that the month and year values rebalance to fit the larger unit when possible (for example, 24 months becomes two years). However, because DataRobot cannot account for leap years or the varying number of days in a month as they relate to your data, it cannot convert days into a larger unit.

Number of backtests

You can change the number of backtests, if desired. The default number of backtests depends on the project parameters, but you can configure up to 20. Before setting the number of backtests, use the histogram to validate that the training and validation sets of each fold will have sufficient data to train a model. See the dataset requirements when configuring backtests. If you set a number of backtests that results in any of the runs not meeting data criteria, DataRobot only runs the backtests that do meet the minimums (and marks the display with an asterisk).

By default, DataRobot creates a holdout fold when training models in your project. In some cases, however, you may want to create a project without a holdout set. To do so, uncheck the Add Holdout fold box. If you disable the holdout fold, the holdout score column does not appear on the Leaderboard (and you have no option to unlock holdout). Any tabs that provide an option to switch between Validation and Holdout will not show the Holdout option.

Note

If you build a project with a single backtest, the Leaderboard does not display a backtest column.

Validation length

To modify the duration, perhaps in response to a warning message, click the dropdown arrow in the Validation length box and enter duration specifics. Validation length can also be set by clicking the bars in the visualization. Note how your modifications change the testing representation:

Gap length

Optionally, set the gap length from the Gap Length dropdown. It is initially set to zero, meaning DataRobot does not apply a gap in testing. When set, DataRobot excludes the data that falls in the gap from use in training or evaluation of the model. Gap length can also be set by clicking the bars in the visualization.

Rows or duration

By default, DataRobot ensures that each backtest has the same duration, either the default or the values set from the dropdown(s) or via the bars in the visualization. If you want the backtest to use the same number of rows, instead of the same length of time, use the Equal rows per backtest toggle:

Time series projects also have an option to set row or duration for the training data, used as the basis for feature engineering. This setting controls the mechanism used to assign data to partitions.

Once you have selected the mechanism for assigning data to backtests, choose the sampling method, either Random or Latest, to determine how rows are assigned from the dataset.

Setting the sampling method is particularly useful if a dataset is not distributed equally over time. For example, if data is skewed to the most recent date, the results of using 50% of random rows versus 50% of the latest will be quite different. By selecting the data more precisely, you have more control over the data that DataRobot trains on.

Individual backtest partitions

If you don't modify any settings, DataRobot disperses rows to backtests equally. However, you can customize an individual backtest's gap, training, validation, and holdout data by clicking the corresponding bar or the pencil icon in the visualization (see the sketch after this list). Note that:

  • You can only set holdout in the Holdout backtest ("backtest 0"); you cannot change the training data size in that backtest.

  • When Equal rows per backtest is checked (which sets the partitions to row-based assignment), only the Training End date is applicable.

  • When Equal rows per backtest is checked, the dates displayed are informative only (that is, they are approximate) and they include padding that is set by the feature derivation and forecast point windows.
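Individual backtests can also be customized programmatically. The sketch below, again assuming the public Python client, overrides the gap and validation window of the first backtest while the remaining backtests keep the global settings:

```python
from datetime import datetime

import datarobot as dr

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="order_date",  # placeholder
    number_of_backtests=3,
    validation_duration="P1M",  # global setting
    backtests=[
        # Override backtest index 0 only; unlisted backtests keep global settings
        dr.BacktestSpecification(
            index=0,
            gap_duration="P0D",
            validation_start_date=datetime(2015, 12, 1),
            validation_duration="P1M",
        ),
    ],
)
```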

Regardless of whether you are setting training, gaps, validation, or holdout, elements of the editing screens function the same. Hover on a data element to display a tooltip that reports specific duration information:

Click a section (1) to open the tool for modifying the start and/or end dates; click in the box (2) to open the calendar picker.

Triangle markers indicate the corresponding boundaries. The larger blue triangle marks the active boundary—the boundary that will be modified if you apply a new date in the calendar picker. The smaller orange triangle identifies the other boundary points that can be changed but are not currently selected.

The current duration for training, validation, and gap (if configured) is reported under the date entry box:

Once you have made changes to a data element, DataRobot adds an EDITED label to the backtest.

There is no way to remove the EDITED label from a backtest, even if you manually reset the durations back to the original settings. If you want to be able to apply global duration settings across all backtests, copy the project and restart.

Training and validation

To modify the duration of the training or validation data for an individual backtest:

  1. Click in the backtest to open the calendar picker tool.
  2. Click the triangle for the element you want to modify—options are training start (default), training end/validation start, or validation end.
  3. Modify dates as required.

Modify gaps

A gap is a period between the end of the training set and the start of the validation set, resulting in data being intentionally ignored during model training. You can set the gap length globally or for an individual backtest.

To set a gap, add time between training end and validation start. You can do this by ending training sooner, starting validation later, or both. To set a gap:

  1. Click the triangle at the end of the training period.

  2. Click the Add Gap link.

    DataRobot adds an additional triangle marker. Although they appear next to each other, both the selected (blue) and inactive (orange) triangles represent the same date. They are slightly spaced to make them selectable.

  3. Optionally, set the Training End Date using the calendar picker. The date you set will be the beginning of the gap period (training end = gap start).

  4. Click the orange Validation Start Date marker; the marker changes to blue, indicating that it's selected.

  5. Optionally, set the Validation Start Date (validation start = gap end).

The gap is represented by a yellow band; hover over the band to view the duration.

Holdout duration

To modify the holdout length, click in the red (holdout) area of backtest 0, the holdout partition. Click the displayed date in the Holdout Start Date to open the calendar picker and set a new date. If you modify the holdout partition and the new size results in potential problems, DataRobot displays a warning icon next to the Holdout fold. Click the warning icon to expand the dropdown and reset the duration/date fields.

Lock the duration

You may want to make backtest date changes without modifying the duration of the selected element. You can lock duration for training, for validation, or for the combined period. To lock duration, click the triangle at one end of the period. Next, hold the Shift key and select the triangle at the other end of the locked duration. DataRobot opens calendar pickers for each element:

Change the date in either entry. Notice that the other date updates to mirror the duration change you made.

Interpret the display

The date/time partitioning display represents the training and validation data partitions as well as their respective sizes/durations. Use the visualization to ensure that your models are validating on the area of interest. The chart shows, for each backtest, the specific time period of values for the training, validation, and, if applicable, holdout and gap data. Specifically, you can observe, for each backtest, whether the model will represent an interesting or relevant time period. Will the scores represent a time period you care about? Is there enough data in the backtest to make the score valuable?

The following table describes elements of the display:

Element Description
Observations The binned distribution of values (i.e., frequency), before downsampling, across the dataset. This is the same information as displayed in the feature’s histogram.
Available Training Data The blue color bar indicates the training data available for a given fold. That is, all available data minus the validation or holdout data.
Primary Training Data The dashed outline indicates the maximum amount of data you can train on to get scores from all backtest folds. You can later choose any time window for training, but depending on what you select, you may not then get all backtest scores. (This could happen, for example, if you train on data greater than the primary training window.) If you train on data less than or equal to the Primary Training Data value, DataRobot completes all backtest scores. If you train on data greater than this value, DataRobot runs fewer tests and marks the backtest score with an asterisk (*). This value is dependent on (changed by) the number of configured backtests.
Gap A gap between the end of the training set and the start of the validation set, resulting in the data being intentionally ignored during model training.
Validation A set of data indicated by a green bar that is not used for training (because DataRobot selects a different section at each backtest). It is similar to traditional validation, except that it is time based. The validation set starts immediately at the end of the primary training data (or the end of the gap).
Holdout (only if Add Holdout fold is checked) The reserved (never seen) portion of data used as a final test of model quality once the model has been trained and validated. When using date/time partitioning, holdout is a duration or row-based portion of the training data instead of a random subset. By default, the holdout data size is the same as the validation data size and always contains the latest data. (Holdout size is user-configurable, however.)
Backtest x Time- or row-based folds used for training models. The Holdout backtest is known as "backtest 0" and labeled as Holdout in the visualization. For small datasets and for the highest scoring model from Autopilot, DataRobot runs all backtests. For larger datasets, the first backtest listed is the one DataRobot uses for model building. Its score is reported in the Validation column of the Leaderboard. Subsequent backtests are not run until manually initiated on the Leaderboard.

Additionally, the display includes Target Over Time and Observations histograms. Use these displays to visualize the span of times where models are compared, measured, and assessed—to identify "regions of interest." For example, the displays help to determine the density of data over time, whether there are gaps in the data, etc.

In the displays, the green represents the selection of data that DataRobot validates the model on. The "All Backtests" score is the average of this region. The gradation marks each backtest and its potential overlap with training data.

Study the Target Over Time graph to find interesting regions where there is some data fluctuation. It may be interesting to compare models over these regions. Use the Observations chart to determine whether, roughly speaking, the amount of data in a particular backtest is suitable.

Finally, you can click the red, locked holdout section to see where in the data the holdout scores are being measured and whether it is a consistent representation of your dataset.

Build time-aware models

Once you click Start, DataRobot begins the model building process and returns results to the Leaderboard.

Note

Model parameter selection has not been customized for date/time-partitioned projects. Though automatic parameter selection yields good results in most cases, Advanced Tuning may meaningfully improve performance for some projects that use the Date/Time partitioning feature.

Date duration features

Because having raw dates in modeling can be risky (overfitting, for example, or tree-based models that do not extrapolate well), DataRobot generally excludes them from the Informative Features list if date transformation features were derived. Instead, for OTV projects, DataRobot creates duration features calculated from the difference between date features and the primary date. It then adds the duration features to an optimized Informative Features list. The automation process creates:

  • new duration features
  • new feature lists

New duration features

When derived features (hour of day, day of week, etc.) are created, the feature type of the newly derived features is not date—they become categorical or numeric, for example. To ensure that models learn time distances better, DataRobot computes the duration between primary and non-primary dates, adds that calculation as a feature, and then drops all non-primary dates.

Specifically, when date derivations happen in an OTV project, DataRobot creates one or more new features calculated from the duration between dates. The new features are named duration(<from date>, <to date>), where the <from date> is the primary date. The var type, displayed on the Data page, is Date Duration.

The transformation applies even if the time units differ. In that case, DataRobot computes durations in seconds and displays the information on the Data page (potentially as huge integers). In some cases the value is negative because the <to date> may be before the primary date.
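To picture what a duration feature captures, here is a minimal pandas sketch with hypothetical column names (DataRobot performs the equivalent derivation internally):

```python
import pandas as pd

df = pd.DataFrame({
    "order_date": pd.to_datetime(["2016-01-10", "2016-03-05"]),  # primary date
    "ship_date": pd.to_datetime(["2016-01-12", "2016-02-28"]),   # non-primary date
})

# duration(<from date>, <to date>): elapsed time from the primary date, in
# seconds; negative when the <to date> precedes the primary date
df["duration(order_date, ship_date)"] = (
    df["ship_date"] - df["order_date"]
).dt.total_seconds()

print(df)  # 172800.0 (two days) and -518400.0 (minus six days)
```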

New feature lists

The new feature lists, automatically created based on Informative Features and Raw Features, are a copy of the original with the duration feature(s) added. They are named the same, but with "optimized for time-aware modeling" appended. (For univariate feature lists, duration features are only added if the original date feature was part of the original univariate list.)

When you run full or Quick Autopilot, new feature lists are created later in the EDA2 process. DataRobot then switches the Autopilot process to use the new, optimized list. To use one of the non-optimized lists, you must rerun Autopilot specifying the list you want.

Time-aware Leaderboard models

While most elements of the Leaderboard are the same, DataRobot's calculation and assignment of recommended models differs, as described below. Also, the Sample Size function is different for date/time-partitioned models. Instead of reporting the percentage of the dataset used to build a particular model, the default display lists, under Feature List & Sample Size, the sampling method (random/latest) and either:

  • the start/end date (either manually added or automatically assigned for the recommended model):

  • the duration used to build the model:

  • the number of rows:

  • the Project Settings label, indicating custom backtest configuration:

You can filter the Leaderboard display on the time window sample percent, sampling method, and feature list using the dropdown available from Feature List & Sample Size. Use this to, for example, easily select models in a single Autopilot stage.

Autopilot does not optimize the amount of data used to build models when using Date/Time partitioning. Different length training windows may yield better performance by including more data (for longer model-training periods) or by focusing on recent data (for shorter training periods). You may improve model performance by adding models based on shorter or longer training periods. You can customize the training period with the Add a Model option on the Leaderboard.

Another partitioning-dependent difference is the origination of the Validation score. With date partitioning, DataRobot initially builds a model using only the first backtest (the partition displayed just below the holdout test) and reports the score on the Leaderboard. When calculating the holdout score (if enabled) for row count or duration models, DataRobot trains on the first backtest, freezes the parameters, and then trains the holdout model. In this way, models have the same relationship (i.e., end of backtest 1 training to start of backtest validation will be equivalent in duration to end of holdout training data to start of holdout).

Note, however, that backtesting scores are dependent on the sampling method selected. DataRobot only scores all backtests for a limited number of models (you must manually run others). The automatically run backtests are based on:

  • With random, DataRobot always backtests the best blueprints on the max available sample size. For example, if BP0 on P1Y @ 50% has the best score, and BP0 has been trained on P1Y@25%, P1Y@50% and P1Y (the 100% model), DataRobot will score all backtests for BP0 trained on P1Y.

  • With latest, DataRobot preserves the exact training settings of the best model for backtesting. In the case above, it would score all backtests for BP0 on P1Y @ 50%.

Note that when the model used to score the validation set was trained on less data than the training size displayed in the Leaderboard, the score displays an asterisk. This happens when training size is equal to full size minus holdout.

Just like cross-validation, you must initiate a separate build for the other configured backtests (if you initially set the number of backtests to greater than 1). Click a model’s Run link from the Leaderboard, or use Run All Backtests for Selected Models from the Leaderboard menu. (You can use this option to run backtests for one or multiple models at one time.)

The resulting score displayed in the All Backtests column represents an average score for all backtests. See the description of Model Info for more information on backtest scoring.
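Backtests can also be initiated programmatically. Assuming the Python client, something like the following queues the remaining backtests for a single model (placeholder IDs; verify method names against your client version):

```python
import datarobot as dr

model = dr.DatetimeModel.get(project_id, model_id)  # placeholder IDs

# Queue the remaining backtests; once they complete, the averaged score
# appears in the All Backtests column on the Leaderboard
job = model.score_backtests()
job.wait_for_completion()
```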

Training period

Note

Consider retraining your model on the most recent data before final deployment.

You can change the training range and sampling rate and then rerun a particular model for date-partitioned builds. Note that you cannot change the duration of the validation partition once models have been built; that setting is only available from the Advanced options link before building has started. Click the plus sign (+) to open the New Training Period dialog:

The New Training Period box has multiple selectors, described in the table below:

Selection Description
Frozen run toggle (1) Freeze the run, reusing the parameter settings established in the original model run.
Training mode (2) Rerun the model using a different training period. Before setting this value, see the details of row count vs. duration and how they apply to different folds.
Snap to (3) "Snap to" predefined points, to facilitate entering values and avoid manual scrolling or calculation.
Enable time window sampling (4) Train on a subset of data within a time window for a duration or start/end training mode. Check to enable and specify a percentage.
Sampling method (5) Select the sampling method used to assign rows from the dataset.
Summary graphic (6) View a summary of the observations and testing partitions used to build the model.
Final Model (7) View an image that changes as you adjust the dates, reflecting the data to be used in the model you will make predictions with (see the note below).

Once you have set a new value, click Run with new training period. DataRobot builds the new model and displays it on the Leaderboard.
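Assuming the Python client, a new training period for an existing blueprint can be requested roughly as follows (placeholder IDs; signatures may vary by client version):

```python
import datarobot as dr

project = dr.Project.get(project_id)                # placeholder IDs
model = dr.DatetimeModel.get(project_id, model_id)

# Duration-based training window with time window sampling enabled
job = project.train_datetime(
    model.blueprint_id,
    training_duration="P6M",    # six months, measured back from validation start
    time_window_sample_pct=50,  # train on 50% of rows within that window
)
```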

Setting the duration:

To change the training period a model uses, select the Duration tab in the dialog and set a new length. Duration is measured from the beginning of validation working back in time (to the left). With the Duration option, you can also enable time window sampling.

DataRobot returns an error for any period of time outside of the observation range. Also, the units available depend on the time format (for example, if the format is %d-%m-%Y, you won't have hours, minutes, and seconds).

Setting the row count:

The row count used to build a model is reported on the Leaderboard as the Sample Size. To vary this size, click the Row Count tab in the dialog and enter a new value.

Setting the start and end dates:

If you enable Frozen run by clicking the toggle, DataRobot reuses the parameter settings it established in the original model run on the newly specified sample. Enabling Frozen run unlocks a third training criterion, Start/End Date. Use this selection to manually specify which data DataRobot uses to build the model. With this setting, after unlocking holdout, you can train a model into the Holdout data. (The Duration and Row Count selectors do not allow training into holdout.) Note that if holdout is locked and you overlap with this setting, model building will fail. With the start and end dates option, you can also enable time window sampling.

When setting start and end dates, note the following:

  • DataRobot does not run backtests because some of the data may have been used to build the model.
  • The end date is excluded when extracting data. In other words, if you want data through December 31, 2015, you must set end-date to January 1, 2016.
  • If the validation partition (set via Advanced options before initial model build) occurs after the training data, DataRobot displays a validation score on the Leaderboard. Otherwise, the Leaderboard displays N/A.
  • Similarly, if any of the holdout data is used to build the model, the Leaderboard displays N/A for the Holdout score.
  • Date/time partitioning does not support dates before 1900.

Click Start/End Date to open a clickable calendar for setting the dates. The dates displayed on opening are those used for the existing model. As you adjust the dates, check the Final model graphic to view the data your model will use.

Time window sampling

If you do not want to use all data within a time window for a date/time-partitioned project, you can train on a subset of data within a time window specification. To do so, check the Enable Time Window sampling box and specify a percentage. DataRobot will take a uniform sample over the time range using that percentage of the data. This feature helps with larger datasets that may need the full time window to capture seasonality effects, but could otherwise face runtime or memory limitations.

Summary information

Once models are built, use the Model Info tab for model overview, backtest summary, and resource usage information.

Some notes:

  • Hover over the folds to display rows, dates, and duration as they may differ from the values shown on the Leaderboard. The values displayed are the actual values DataRobot used to train the model. For example, suppose you request a Start/End Date model from 6/1/2015 to 6/30/2015 but there is only data in your dataset from 6/7/2015 to 6/14/2015. The hover display indicates the actual dates, 6/7/2015 through 6/15/2015, for start and end dates, with a duration of eight days.

  • The Model Overview is a summary of row counts from the validation fold (the first fold under the holdout fold).

  • If you created duration-based testing, the validation summary could result in differences in numbers of rows. This is because the number of rows of data available for a given time period can vary.

  • A message of Not Yet Computed for a backtest indicates that no data was available for the validation fold (for example, because of gaps in the dataset). In this case, where not all backtests were completed, DataRobot displays an asterisk on the backtest score.

  • The “reps” listed at the bottom correspond to the backtests above and are ordered in the sequence in which they finished running.

More info...

The following sections provide details on using the date/time partitioning feature:

Understand a feature's Over Time chart

The Over Time chart helps you identify trends and potential gaps in your data by displaying, for both the original modeling data and the derived data, how a feature changes over the primary date/time feature. It is available for all time-aware projects (OTV, single series, and multiseries). For time series, it is available for each user-configured forecast distance.

Using the page's tools, you can focus on specific time periods. Display options for OTV and single-series projects differ from those of multiseries projects. Note that to view the Over Time chart you must first compute chart data. Once computed:

  1. Set the chart's granularity. The resolution options are auto-detected by DataRobot. All project types allow you to set a resolution (this option is under Additional settings for multiseries projects).

  2. Toggle the histogram display on and off to see a visualization of the bins DataRobot is using for EDA1.

  3. Use the date range slider below the chart to highlight a specific region of the time plot. For smaller datasets, you can drag the sliders to a selected portion. Larger datasets use block pagination.

  4. For multiseries projects, you can set both the forecast distance and an individual series (or average across series) to plot:

For time series projects, the Data page also provides a Feature Lineage chart to help understand the creation process for derived features.

Partition without holdout

Sometimes, you may want to create a project without a holdout set, for example, if you have limited data points. Date/time partitioning projects have a minimum data ingest size of 140 rows. If Add Holdout fold is not checked, minimum ingest becomes 120 rows.

By default, DataRobot creates a holdout fold. When you toggle the switch off, the red holdout fold disappears from the representation (only the backtests and validation folds are displayed) and backtests recompute and shift to the right. Other configuration functionality remains the same—you can still modify the validation length and gap length, as well as the number of backtests. On the Leaderboard, after the project builds, you see validation and backtest scores, but no holdout score or Unlock Holdout option.
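In the Python client, this corresponds to disabling the holdout in the partitioning specification, as in this sketch (placeholder column name):

```python
import datarobot as dr

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="order_date",  # placeholder
    number_of_backtests=3,
    disable_holdout=True,  # no holdout fold; minimum ingest drops to 120 rows
)
```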

The following lists other differences when you do not create a holdout fold:

  • Both the Lift Chart and ROC Curve can only be built using the validation set as their Data Source.
  • The Model Info tab shows no holdout backtest or warnings related to holdout.
  • You can only compute predictions for All data and the Validation set from the Predict tab.
  • The Learning Curves graph does not plot any models trained into Validation or Holdout.
  • Model Comparison uses results only from validation and backtesting.

Model recommendation

When backtesting is finished, one of the models—the most accurate individual, non-blender model—is selected and then prepared for deployment. The resulting prepared model is marked with the Recommended for Deployment badge. The following describes the preparation process for time-aware projects:

  1. First, DataRobot calculates feature impact for the selected model and uses it to generate a reduced feature list.

  2. Next, the app retrains the selected model on the reduced feature list. (If the selected model is a start/end model, because it is frozen, it will not be retrained on the reduced feature list or most recent data.)

  3. If the new model performs better than the original model, DataRobot then retrains the better scoring model on the most recent data (using the same duration/row count as the original model). If using duration, and the equivalent period does not provide enough rows for training, DataRobot extends it until the minimum is met.

Note that there are two exceptions for time series models:

  • Feature reduction cannot be run for baseline (naive) or ARIMA models. This is because they use only date and naive prediction features (i.e., there is nothing to reduce).
  • Because they don't use weights to train and don't need retraining, baseline (naive) models are not retrained on the most recent data.

Access tab insights

Retraining a model on the most recent data* results in the model not having out-of-sample predictions, which is what many of the Leaderboard insights rely on. That is, the child (recommended and rebuilt) model trained with the most recent data has no additional samples with which to score the retrained model. Because insights are a key component to both understanding DataRobot's recommendation and facilitating model performance analysis, DataRobot links insights from the parent (original) model to the child (frozen) model.

* This situation is also possible when a model is trained into holdout ("slim-run" models also have no stacked predictions).

The insights affected are:

  • ROC Curve
  • Lift Chart
  • Confusion Matrix
  • Stability
  • Forecast Accuracy
  • Series Insights
  • Accuracy Over Time
  • Feature Effects/Feature Fit

Final models

The original ("final") model is trained without holdout data and therefore does not have the most recent data. Instead, it represents the first backtest. This is so that predictions match the insights, coefficients, and other data displayed in the tabs that help evaluate models. (You can verify this by checking the Final model representation on the New Training Period dialog to view the data your model will use.) If you want to use more recent data, retrain the model using start and end dates.

Retraining before deployment

Once you have selected a model and unlocked holdout, you may want to retrain the model (although with hyperparameters frozen) to ensure predictive accuracy. Because the original model was trained without the holdout data, it did not have the most recent data. You can verify this by checking the Final model representation on the New Training Period dialog to view the data your model will use. To retrain the model, do the following:

  1. On the Leaderboard, click the plus sign (+) to open the New Training Period dialog and change the training period.

  2. View the final model and determine whether your model is trained on the most up-to-date data.

  3. Enable Frozen run by clicking the slider.

  4. Select Start/End Date and enter the dates for the retraining, including the dates of the holdout data. Remember to use the “+1” method (enter the date immediately after the final date you want included); see the sketch below.
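Assuming the Python client, the equivalent frozen retrain looks roughly like this (placeholder IDs and dates):

```python
from datetime import datetime

import datarobot as dr

model = dr.DatetimeModel.get(project_id, model_id)  # placeholder IDs

# Frozen retrain through holdout; the end date is exclusive, so use the
# "+1" method: the day after the last date you want included
job = model.request_frozen_datetime_model(
    training_start_date=datetime(2014, 1, 1),
    training_end_date=datetime(2016, 1, 1),  # includes data through 2015-12-31
)
```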

Date/date ranges

DataRobot uses date points to represent dates and date ranges within the data, applying the following principles:

  • All date points adhere to ISO 8601, UTC (e.g., 2016-05-12T12:15:02+00:00), an internationally accepted way to represent dates and times, with some small variation in the duration format. Specifically, there is no support for ISO weeks (e.g., P5W).

  • Models are trained on data between two ISO dates. DataRobot displays these dates as a date range, but inclusion decisions and all key boundaries are expressed as date points. When you specify a date, DataRobot includes start dates and excludes end dates.

  • Once the date format of the partitioning column is set, DataRobot converts all charts, selectors, etc., to this format for the project.
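A small pandas sketch of the inclusive-start, exclusive-end convention (illustrative only):

```python
import pandas as pd

df = pd.DataFrame({"ts": pd.date_range("2015-12-28", "2016-01-03", freq="D")})

start = pd.Timestamp("2015-12-30")  # start dates are included
end = pd.Timestamp("2016-01-02")    # end dates are excluded

# Selects 2015-12-30 through 2016-01-01 (three rows)
window = df[(df["ts"] >= start) & (df["ts"] < end)]
```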

Gaps

Configuring gaps allows you to reproduce time gaps usually observed between model training and model deployment (a period for which data is not to be used for training). It is useful in cases where, for example:

  • only older data is available for training (because ground truth is difficult to collect)
  • a model’s validation and subsequent deployment takes weeks or months
  • predictions must be delivered in advance for review or action

A simple example: in insurance, it can take roughly a year for a claim to "develop" (the time between filing and determining the claim payout). For this reason, an actuary is likely to price 2017 policies based on models trained with 2015 data. To replicate this practice, you can insert a one-year gap between the training set and the validation set, making model evaluation more realistic. Other examples include when pricing needs regulator approval, retail sales for a seasonal business, and pricing estimates that rely on delayed reporting.

Backtests

Backtesting is conceptually the same as cross-validation in that it provides the ability to test a predictive model using existing historical data. That is, you can evaluate how the model would have performed historically to estimate how the model will perform in the future. Unlike cross-validation, however, backtests allow you to select specific time periods or durations for your testing instead of random rows, creating in-sequence, instead of randomly sampled, “trials” for your data. So, instead of saying “break my data into 5 folds of 1000 random rows each,” with backtests you say “simulate training on 1000 rows, predicting on the next 10. Do that 5 times.” Backtests simulate training the model on an older period of training data, then measure performance on a newer period of validation data. After models are built, through the Leaderboard you can change the training range and sampling rate. DataRobot then retrains the models on the shifted training data.

If the goal of your project is to predict forward in time, backtesting gives you a better understanding of model performance (on a time-based problem) than cross-validation. For time series problems, this equates to more confidence in your predictions. Backtesting confirms model robustness by allowing you to see whether a model consistently outperforms other models across all folds.

The number of backtests that DataRobot defaults to is dependent on the project parameters, but you can configure the build to include up to 20 backtests for additional model accuracy. Additional backtests provide you with more trials of your model so that you can be more sure about your estimates. You can carefully configure the duration and dates so that you can, for example, generate “10 two-month predictions.” Once configured to avoid specific periods, you can ask “Are the predictions similar?” or for two similar months, “Are the errors the same?”

Large gaps in your data can make backtesting difficult. If your dataset has long periods of time without any observed data, it is prudent to review where these gaps fall in your backtests. For example, if a validation window has too few data points, choosing a longer data validation window will ensure more reliable validation scores. While using more backtests may give you a more reliable measure of model performance, it also decreases the maximum training window available to the earliest backtest fold.
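To make the mechanics concrete, here is an illustrative, plain-Python sketch of how duration-based backtest windows might tile a timeline, walking backward from the most recent data (a simplification, not DataRobot's exact algorithm):

```python
from datetime import date, timedelta

def backtest_windows(data_end, n_backtests, validation_days, gap_days=0):
    """Carve one validation window per backtest, newest first; training
    always ends where the gap (if any) begins."""
    windows = []
    val_end = data_end
    for i in range(n_backtests):
        val_start = val_end - timedelta(days=validation_days)
        train_end = val_start - timedelta(days=gap_days)
        windows.append({
            "backtest": i + 1,
            "train_end": train_end,              # training uses data before this date
            "gap": (train_end, val_start),       # excluded from training and scoring
            "validation": (val_start, val_end),  # start inclusive, end exclusive
        })
        val_end = val_start  # the next (older) backtest ends where this one starts
    return windows

for w in backtest_windows(date(2016, 1, 1), n_backtests=3, validation_days=30):
    print(w)
```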


Updated December 1, 2021