
Configure predictions settings

On a deployment's Settings > Predictions tab, you can view details about your deployment's inference (also known as scoring) data—the data containing prediction requests and results from the model.

On the Predictions Settings page, you can access the following information:

Prediction environment: Displays the environment where predictions are generated. Prediction environments allow you to establish access controls and approval workflows.
Prediction timestamp: Displays the method used for time-stamping prediction rows: either the time of the prediction request or a date/time feature (for example, the forecast date) provided with the prediction data. Forecast date time-stamping is set automatically for time series deployments; it provides a common time axis between the training data and the data used to compute data drift and accuracy statistics. This setting cannot be changed after the deployment is created and predictions are made.
Batch monitoring: For batch-enabled deployments, organizes monitoring statistics by batch instead of by time.
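
If you prefer to review these details programmatically, the sketch below uses the DataRobot Python client; the endpoint, token, and deployment ID are placeholders, and the exact attributes and getters available can vary by client version.

```python
import datarobot as dr

# Placeholder credentials; substitute your own endpoint and API token.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Retrieve the deployment shown on the Settings > Predictions tab.
deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Basic prediction-related details for the deployment.
print(deployment.label)
print(deployment.default_prediction_server)  # prediction server/environment info, if set

# Monitoring-related settings are exposed through dedicated getters.
print(deployment.get_drift_tracking_settings())
```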

Set prediction autoscaling settings for DataRobot serverless deployments

To configure on-demand predictions for a DataRobot serverless prediction environment, scroll down to Autoscaling and set the following options:

Prediction intervals in DataRobot serverless prediction environments

In a DataRobot serverless prediction environment, to make predictions with time-series prediction intervals included, you must include pre-computed prediction intervals when registering the model package. If you don't pre-compute prediction intervals, the deployment resulting from the registered model doesn't support enabling prediction intervals.
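
As a rough illustration of the note above, the following sketch pre-computes prediction intervals with the DataRobot Python client before the model is registered; the project and model IDs are placeholders, and call availability may vary by client version.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Retrieve the time series model you intend to register.
model = dr.DatetimeModel.get(project="YOUR_PROJECT_ID", model_id="YOUR_MODEL_ID")

# Pre-compute 80% prediction intervals; do this before creating the model
# package so the resulting deployment can enable prediction intervals.
job = model.calculate_prediction_intervals(prediction_intervals_size=80)
job.wait_for_completion()
```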

Minimum compute instances (Premium feature): Sets the minimum number of compute instances for the model deployment. If your organization doesn't have access to always-on predictions, this setting is fixed at 0 and isn't configurable. With the minimum set to 0, the inference server is stopped after 7 days of inactivity. The minimum and maximum compute instances depend on the model type; for more information, see the compute instance configurations note.
Maximum compute instances: Sets the maximum number of compute instances for the model deployment to a value above the configured minimum. To limit compute resource usage, set the maximum equal to the minimum. The minimum and maximum compute instances depend on the model type; for more information, see the compute instance configurations note.
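
Once the instance range is configured, prediction requests scale within those bounds. For example, a batch scoring job can be submitted with the DataRobot Python client roughly as follows; the deployment ID and file paths are placeholders.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Score a local CSV against the deployment; the serverless prediction
# environment provisions compute between the configured minimum and
# maximum instances while the job runs.
job = dr.BatchPredictionJob.score(
    deployment="YOUR_DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "to_score.csv"},
    output_settings={"type": "localFile", "path": "predictions.csv"},
)
job.wait_for_completion()
```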

Premium feature: Always-on predictions

Always-on predictions are a premium feature. Contact your DataRobot representative or administrator for information on enabling the feature.

Change secondary datasets for Feature Discovery

Feature Discovery identifies and generates new features from multiple datasets, so you no longer need to perform manual feature engineering to consolidate multiple datasets into one. This process is based on relationships between datasets and the features within those datasets. DataRobot provides an intuitive relationship editor that allows you to build and visualize these relationships. The Feature Discovery engine analyzes the relationship graphs and the included datasets to determine a feature engineering “recipe” and, from that recipe, generates secondary features for training and predictions. While configuring the deployment settings, you can change the selected secondary dataset configuration.

Secondary datasets configurations: Previews the dataset configuration or provides an option to change it. By default, DataRobot makes predictions using the secondary datasets configuration defined when starting the project. Click Change to select an alternative configuration before uploading a new primary dataset.
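
For reference, secondary dataset configurations can also be inspected programmatically. The sketch below assumes the SecondaryDatasetConfigurations API available in recent versions of the DataRobot Python client; the project ID is a placeholder.

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# List the secondary dataset configurations defined for the Feature
# Discovery project, to identify an alternative configuration to use
# for predictions.
configs = dr.SecondaryDatasetConfigurations.list(project_id="YOUR_PROJECT_ID")
for config in configs:
    print(config.id)
```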

Updated November 1, 2024