# Time-aware considerations

> Time-aware considerations - This page describes considerations to be aware of when working with
> DataRobot time series modeling.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-01T23:10:48.119654+00:00` (UTC).

## Primary page

- [Time-aware considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html): Full documentation for this topic (HTML).

## Sections on this page

- [Date/time partitioning considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#datetime-partitioning-considerations): In-page section heading.
- [Time series-specific considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#time-series-specific-considerations): In-page section heading.
- [Accuracy](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#accuracy): In-page section heading.
- [Anomaly Detection](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#anomaly-detection): In-page section heading.
- [Data prep tool](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#data-prep-tool): In-page section heading.
- [Data Quality](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#data-quality): In-page section heading.
- [Monotonic constraints](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#monotonic-constraints): In-page section heading.
- [Productionalization](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#productionalization): In-page section heading.
- [Scale](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#scale): In-page section heading.
- [Trust](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#trust): In-page section heading.
- [Multiseries considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#multiseries-considerations): In-page section heading.
- [Clustering considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#clustering-considerations): In-page section heading.
- [Segmented modeling considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#segmented-modeling-considerations): In-page section heading.
- [Combined Model deployment considerations](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#combined-model-deployment-considerations): In-page section heading.
- [Release 6.0 and earlier](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-consider.html#release-60-and-earlier): In-page section heading.

## Related documentation

- [Reference documentation](https://docs.datarobot.com/en/docs/reference/index.html): Linked from this page.
- [Predictive AI reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/index.html): Linked from this page.
- [Time series reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/index.html): Linked from this page.
- [date/time partitioning](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-adv-modeling/ts-date-time.html): Linked from this page.
- [file requirements](https://docs.datarobot.com/en/docs/reference/data-ref/file-types.html): Linked from this page.
- [Scoring code support](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/download-classic.html#scorecode-intro): Linked from this page.
- [Enable cross-series feature generation](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-adv-opt.html#enable-cross-series-feature-generation): Linked from this page.
- [considerations](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/unsupervised/clustering.html#clustering-for-time-aware-projects): Linked from this page.
- [Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-predictions.html#make-predictions-tab): Linked from this page.
- [different options available](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-flow-overview.html#project-types): Linked from this page.
- [Forecast Window](https://docs.datarobot.com/en/docs/reference/glossary/index.html#forecast-window): Linked from this page.
- [Accuracy Over Time](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/aot-classic.html): Linked from this page.

## Documentation content

# Time-aware considerations

Both time-aware modeling mechanisms—OTV and automated time series—are implemented using [date/time partitioning](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-adv-modeling/ts-date-time.html). Therefore, the date/time partitioning notes apply to all time-aware modeling. See also:

- Time series-specific considerations
- Multiseries considerations
- Clustering (time series-specific) considerations
- Segmented modeling

See the documented [file requirements](https://docs.datarobot.com/en/docs/reference/data-ref/file-types.html) for information on file size and series limit considerations.

> [!NOTE] Note
> Considerations are listed beginning with newest additions for easier identification.

## Date/time partitioning considerations

- Frozen thresholds are not supported.
- Blenders that contain monotonic models do not display the MONO label on the Leaderboard for OTV projects.
- When previewing predictions over time, the interval only displays for models that haven't been retrained (for example, it won't show up for models with the Recommended for Deployment badge).
- If you configure long backtest durations, DataRobot still builds models but does not run backtests when there is not enough data. In these cases, the backtest score is not available on the Leaderboard.
- Time zones on date partition columns are ignored, so datasets with multiple time zones may cause issues; the workaround is to convert to a single time zone outside of DataRobot. Daylight saving time is also not supported.
- Dates before 1900 are not supported. If necessary, shift your data forward in time.
- Leap seconds are not supported.
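The time-zone workaround above can be done entirely outside of DataRobot. A minimal stdlib sketch (the timestamps and zone names are hypothetical) that localizes mixed-zone wall-clock times and rewrites them on a single UTC clock:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical raw timestamps recorded in two different time zones.
raw = [
    ("2024-03-01 09:00", "America/New_York"),
    ("2024-03-01 15:00", "Europe/Berlin"),
]

def to_utc_naive(ts: str, tz: str) -> str:
    """Localize a wall-clock timestamp, convert it to UTC, and drop the
    tzinfo so the partition column carries one consistent clock."""
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo(tz))
    return local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None).isoformat()

normalized = [to_utc_naive(ts, tz) for ts, tz in raw]
# Both rows now refer to the same instant: 2024-03-01T14:00:00
```

After normalization the dataset has no time-zone or DST ambiguity, which matches the single-time-zone requirement above.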

## Time series-specific considerations

In addition to the above items, consider the following when working with time series projects:

- Accuracy
- Anomaly Detection
- Data prep tool
- Data Quality
- Monotonic constraints
- Productionalization
- Scale
- Trust

### Accuracy

- DeepAR:
- Temporal hierarchical models:
- Nowcasting:
- Feature Effects, Compliance documentation, and Prediction Explanations are not supported for autoregressive models (Traditional Time Series (TTS) and deep learning models). This includes:
- Other autoregressive modelers such as Prophet, TBATs, and ETS.

### Anomaly Detection

- Model comparison:
- Multistage OTV is not available for unsupervised projects.
- The anomaly threshold for the Anomaly Over Time chart is fixed at 0.5 for per-series blueprints. Non-per-series blueprints use a computed threshold, which is dynamic.
- The Anomaly Assessment Insight:

### Data prep tool

Consider the following when doing gap handling and aggregation:

- Data prep is not supported for deployments or for use with the API.
- Only numeric targets are supported.
- Only numeric, categorical, text, and primary date columns are included in the output.
- The smallest allowed time step for aggregation is one minute.
- Datasets added to the AI Catalog prior to the introduction of the data prep tool are not eligible; re-upload datasets to apply the tool.
- Shared deployments do not support automatic application of the transformed data prep dataset for predictions.
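The aggregation step can be approximated outside the tool; a minimal sketch (hypothetical readings, simple mean aggregation) that floors timestamps to the tool's smallest allowed step of one minute:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sub-minute readings: (timestamp, numeric target).
rows = [
    ("2024-01-01 00:00:10", 2.0),
    ("2024-01-01 00:00:40", 4.0),
    ("2024-01-01 00:01:05", 6.0),
]

def aggregate_to_minute(rows):
    """Floor each timestamp to the minute (the smallest allowed step)
    and average the numeric target values within each bucket."""
    buckets = defaultdict(list)
    for ts, value in rows:
        minute = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(second=0)
        buckets[minute].append(value)
    return {m.isoformat(): sum(v) / len(v) for m, v in sorted(buckets.items())}
```

Note that only a numeric target can be aggregated this way, mirroring the numeric-target restriction above.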

### Data Quality

- The check for leading/trailing zeros only runs when less than 80% of target values are zeros.
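That gate can be expressed directly; a minimal sketch, where the 80% cutoff comes from the bullet above and the function name and shape are assumptions of this sketch:

```python
def should_run_zero_check(target, threshold=0.8):
    """Run the leading/trailing-zeros data quality check only when fewer
    than 80% of target values are zero (the documented cutoff)."""
    zero_share = sum(1 for v in target if v == 0) / len(target)
    return zero_share < threshold

# 40% zeros: check runs; 80% zeros: check is skipped.
should_run_zero_check([0, 0, 1, 2, 3])   # True
should_run_zero_check([0, 0, 0, 0, 1])   # False
```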

### Monotonic constraints

- XGBoost is the only supported model.
- You can create a monotonic feature list after project creation using any numeric post-derivation feature; however, if you specified a raw feature list as monotonic before project creation, all features in it are marked as Do Not Derive (DND).
- When the blueprint includes an offset (for example, naive predictions), the final predictions may not be monotonic after the offset is applied; XGBoost itself still honors monotonicity.
- If the model is a collection of models (such as per-series XGBoost or a performance-clustered blueprint), monotonicity is preserved per series/cluster.
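The offset caveat above is easy to reproduce: a constrained model's raw output can be monotonic while the offset-adjusted predictions are not. A small illustration with made-up numbers:

```python
def is_monotonic_nondecreasing(seq):
    """True when each value is >= the previous one."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

# Hypothetical raw output of a model trained with a monotonic-increasing
# constraint on some feature: monotonic as required.
model_output = [1.0, 1.5, 2.0, 2.5]
# A hypothetical per-row naive-prediction offset added downstream.
naive_offset = [0.0, 3.0, 0.5, 0.2]

final = [m + o for m, o in zip(model_output, naive_offset)]
# model_output is monotonic, but final ([1.0, 4.5, 2.5, 2.7]) is not.
```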

### Productionalization

- Prediction Explanations:
- ARIMA, LSTM, and DeepAR models cannot be deployed to prediction servers. Instead, deploy using either:
- Scoring code support requires the following feature flags: Enable Scoring Code and, if needed, Enable Scoring Code support for Keras Models.
- Time series batch predictions are not available for cross-series projects or traditional time series models (such as ARIMA).
- The ability to create a job definition for all ARIMA and non-ARIMA cross-series models is disabled when Enable cross-series feature generation is enabled.

### Scale

- For temporal hierarchical models, the Feature Over Time chart may look different from the data used at the edges of the partitions for the temporal aggregate.
- When using configurable model parallelization (Customizable FD splits), if one parallel job is deleted during Autopilot, the remaining model split jobs will error.
- 10GB OTV requires that multistep OTV be enabled.

### Trust

- Model Comparison (over time) shows only the first 1000 series. The insight does not synchronize with job computation status and can only show fully precomputed data.
- Forecast vs Actuals (FvsA) chart:
- Accuracy over Time (AOT) chart:
- When handling data quality issues in Numeric Data Cleansing, some models can experience performance regression.
- CSV Export is not available for “All Backtest” in the Forecast vs Actuals chart.

## Multiseries considerations

In addition to the general time series considerations above, be aware:

- The Feature Association Matrix is not supported.
- Most multiseries UI insights and plots support up to 1000 series. For large datasets, however, some insights must be calculated on-demand, per series.
- Multiseries supports a single (1) series ID column.
- Multiseries ID values should be either all numeric or all strings; blank or float-typed series ID values are not fully supported.
- Multiseries does not support Prophet blueprints.
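The series ID constraints above can be validated before upload; a minimal sketch (the rules mirror the bullets above, while the function itself is hypothetical):

```python
def validate_series_ids(ids):
    """Check the documented series ID constraints: no blank or missing
    values, no floats, and no mixing of numeric and string IDs."""
    if any(i is None or (isinstance(i, str) and not i.strip()) for i in ids):
        return False, "blank or missing series ID"
    if any(isinstance(i, float) for i in ids):
        return False, "float series IDs are not fully supported"
    kinds = {"numeric" if isinstance(i, int) else "string" for i in ids}
    if len(kinds) > 1:
        return False, "mixed numeric and string series IDs"
    return True, "ok"

validate_series_ids(["store_1", "store_2"])  # (True, "ok")
validate_series_ids([1, "store_2"])          # mixed types: rejected
```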

## Clustering considerations

- Clustering is only available for multiseries time series projects. Your data must contain a time index and at least 10 series.
- To create X clusters, you need at least X series, each with 20+ time steps. (For example, if you specify 3 clusters, at least three of your series must have a length of 20 or more time steps.)
- The union of all selected series must collectively span at least 35 time steps.
- At least two clusters must be discovered for the clustering model to be used in a segmented modeling run. What does it mean to "discover" clusters? To build clusters, DataRobot must be able to group the data into two or more distinct groups. For example, if a dataset has 10 series but they are all copies of the same single series, DataRobot cannot discover more than one cluster; similarly, very slight time shifts of the same data will not be discoverable. If the data is so mathematically similar that it cannot be separated into different clusters, it cannot subsequently be used by segmentation. The "closeness" of the data is model-dependent because the convergence conditions differ: velocity clustering would not converge if a project has 10 series, all with the same means, but that does not imply that K-means itself wouldn't converge. Note, however, that the restrictions are less strict if clusters are not being used for segmentation.
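The series-count and length rules above can be pre-checked; a minimal sketch, assuming each series is represented as a list of time-step indices:

```python
def clustering_eligible(series, n_clusters):
    """series maps series ID -> list of time-step indices. Checks the
    documented rules: at least 10 series, at least n_clusters series
    with 20+ time steps, and a union of all selected series spanning
    at least 35 distinct time steps."""
    if len(series) < 10:
        return False
    long_enough = sum(1 for steps in series.values() if len(steps) >= 20)
    if long_enough < n_clusters:
        return False
    union = set().union(*series.values())
    return len(union) >= 35
```

Passing these checks does not guarantee that DataRobot will actually discover the requested number of clusters; that depends on how mathematically separable the series are, as described above.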

## Segmented modeling considerations

- Projects are limited to 100 segments; all segments must total less than 1GB (5GB with a feature flag; contact your DataRobot representative).
- Predictions are only available when using the Make Predictions tab on the Combined Model's Leaderboard or via the API.
- Time series clustering projects are supported. See the associated considerations.

### Combined Model deployment considerations

Consider the following when working with segmented modeling deployments:

- Time series segmented modeling deployments do not support data drift monitoring.
- Automatic retraining for segmented deployments that use clustering models is disabled; retraining must be done manually.
- Retraining can be triggered by accuracy drift in a Combined Model; however, it doesn't support monitoring accuracy in individual segments or retraining individual segments.
- Combined model deployments can include standard model challengers.

## Release 6.0 and earlier

- For the Make Predictions tab:
- Classification models are not optimized for rare events, and should have >15% frequency for their minority label.
- Run Autoregressive models using the "Baseline Only" feature list. Using other feature lists could cause Feature Effects or compliance documentation to fail, as the autoregressive models do not use the additional features that are part of the larger default lists and they are not designed to work with them.
- Feature Effects and Compliance documentation are disabled for LSTM/DeepAR blueprints.
- Eureqa with Forecast Distance is limited to 15 FD values. These blueprints only run on smaller datasets with fewer than 100K rows, or when the total number of levels for the categorical features is less than 1000. Their grid search plots in Advanced Tuning mark only the single best grid search point, independent of the FD value. The blueprint can take a long time to complete if the task size parameter is set too large.
- Forecast distance blenders are limited to projects with a maximum of 50 FDs.
- The "Forecast distance" selector on the Coefficients tab is not available for backtests and for models that do not use ForecastDistanceMixin, for example, ARIMA models.
- Monthly differencing on daily datasets can only be triggered through detection. Currently, there is no support to specify monthly seasonality via an advanced option in the UI or API.
- RNN-based blueprints (LSTM and GRU, i.e., long short-term memory and gated recurrent unit) support a maximum of 1000 categorical levels (to prevent out-of-memory errors); high-cardinality features are truncated beyond this limit.
- The training partition for the holdout row in the flexible backtesting configuration is not directly editable. The duration of the first backtest’s training partition is used as the duration for the training partition of the holdout.
- For Repository blueprints, selecting a best-case default feature list is available for ARIMA models only.
- Hierarchical modeling requires the data’s series to be aligned in time (specifically 95% of series must appear on 95% of the timestamps in the data).
- Hierarchical and series-scaled blueprints require the target to be non-negative.
- Series-scaled blueprints only support squared loss (no log link).
- Hierarchical and LSTM blueprints do not support projects that require sampling.
- Model-per-series blueprints (XGB, XGBoost, ENET) support up to 50 series. They are not advance-tunable if the number of series is more than 10.
- ARIMA per-series blueprints are limited to 15K rows per series (i.e., 150K rows for 10 series) and support up to 40 series. The blueprint runs in Autopilot when the number of series is less than 10. Due to a refit for every prediction, the series accuracy computation can take a long time.
- Clustered blueprints are not available for classification. Similarity-based clustering is very time-consuming, takes a long time to train, and uses large amounts of memory (use the default performance-based clustering for large datasets).
- Zero-inflated blueprints are enabled if the target’s minimum value is 0.
- Zero-inflated blueprints only support the “nonzero average baseline” feature list.
- Setting the target to do-not-derive still derives the simple naive target feature for regression projects.
- Hierarchical and zero-inflated models cannot be used when a target is set to do-not-derive because the feature derivation process does not generate the target derived features required for zero-inflated & hierarchical models.
- The group ID for cross-series features cannot have blank or missing values; they cannot mix numeric and non-numeric values, similar to the series ID constraints.
- Prediction Explanations are not available for XGBoost-based hierarchical and two-stage models.
- Series scaling blueprints may have poor accuracy when predicting new series.
- The Feature Association Matrix is not supported in multiseries projects.
- Timestamps can be irregularly spaced but cannot contain duplicate dates within a series.
- Time series datasets cannot contain dates past the year 2262.
- To ensure backtests have enough rows, in highly irregular datasets use the row-count instead of duration partitioning mode.
- VARMAX and VAR blueprints do not support log-transform/exponential modeling.
- ARIMA, VARMAX, and VAR blueprint predictions require history back to the end of the training data when making predictions.
- For non-forecasting time series models (those that allow predicting the current target, FW=[0, 0]):
- Loss families have changed for time series blenders, which may slightly change blending results. Specifically:
- Binary classification projects have somewhat different options available than regression projects. Additionally, classification projects:
- Millisecond datasets:
- Row-based projects require a primary date column.
- Calendar event files:
- When running blueprints from the repository, the Time Series Informative Features list (the default selection if you do not override it) is not optimal. Preferably, select one of the "with differencing" or the "no differencing" feature lists.
- The Forecast Window must be 1000 forecast distances (FDs)/time steps or fewer for small datasets.
- You cannot modify R code for Prophet blueprints; also, they do not support calendar events and cannot use known in advance features.
- Only Accuracy Over Time, Stability, Forecasting Accuracy, and Series Insights plots are available for export; other time series plots are not exportable from the UI or available through the public API.
- Large datasets with many forecast distances are down-sampled after feature derivation to <25GB.
- Accuracy Over Time training computation is disabled if the dataset exceeds the configured threshold after creation of the modeling dataset. The default threshold is 5 million rows.
- Seasonal AUTOARIMA uses large amounts of memory for large seasonality and, due to Python 2.7 issues, could fail on large datasets.
- Seasonality is only detected automatically if the periodicity fits inside the feature derivation window.
- TensorFlow neural network blueprints (in the Repository) do not support text features or making predictions on new series not in the training data.
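The hierarchical-modeling alignment rule in this list (95% of series must appear on 95% of the timestamps) can be pre-checked; a minimal sketch, assuming each series is represented as a set of timestamps:

```python
def is_aligned(series, frac=0.95):
    """series maps series ID -> set of timestamps. True when at least
    95% of series cover at least 95% of all timestamps in the data
    (the documented alignment requirement for hierarchical modeling)."""
    all_ts = set().union(*series.values())
    covered = sum(1 for ts in series.values() if len(ts) >= frac * len(all_ts))
    return covered >= frac * len(series)
```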
