
Configure a deployment

Regardless of where you create a new deployment (the Leaderboard, the Model Registry, or the deployment inventory) or the type of artifact (DataRobot model, custom inference model, or remote model), you are directed to the deployment information page, where you can customize the deployment.

The deployment information page outlines the capabilities of your current deployment based on the data provided, for example, training data, prediction data, or actuals. It populates fields for you to provide details about the training data, inference data, model, and your outcome data.

Standard options and information

When you initiate model deployment, the Deployments tab opens to the Model Information and Prediction History and Service Health sections:

Model Information

The Model Information section provides information about the model being used to make predictions for your deployment. DataRobot uses the files and information from the deployment to complete these fields, so they aren't editable.

| Field | Description |
| --- | --- |
| Model name | The name of your model. |
| Prediction type | The type of prediction the model is making, for example: Regression, Classification, Multiclass, Anomaly Detection, or Clustering. |
| Threshold | The prediction threshold for binary classification models. Records above the threshold are assigned the positive class label; records below the threshold are assigned the negative class label. This field isn't available for Regression or Multiclass models. |
| Target | The name of the dataset column the model predicts. |
| Positive / Negative classes | The positive and negative class values for binary classification models. This field isn't visible for Regression or Multiclass models. |
| Model Package ID | The ID of the model package in the Model Registry. |
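The threshold behavior described above can be sketched in a few lines of Python. This is illustrative only; the threshold value and the class labels are hypothetical examples, not DataRobot code:

```python
# Illustrative sketch of binary classification thresholding (not DataRobot code).
# The 0.5 threshold and the "churn"/"no_churn" labels are hypothetical examples.
def assign_class(positive_probability, threshold=0.5,
                 positive_class="churn", negative_class="no_churn"):
    """Records above the threshold get the positive class label;
    records below it get the negative class label."""
    return positive_class if positive_probability > threshold else negative_class

probabilities = [0.12, 0.48, 0.51, 0.97]
print([assign_class(p) for p in probabilities])
# ['no_churn', 'no_churn', 'churn', 'churn']
```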


If you are part of an organization with deployment limits, the Deployment billing section notifies you of the number of deployments your organization is using against the deployment limit and the deployment cost if your organization has exceeded the limit.

Prediction History and Service Health

The Prediction History and Service Health section provides details about your deployment's inference (also known as scoring) data: the data that contains prediction requests and results from the model.

| Setting | Description |
| --- | --- |
| Configure prediction environment | The environment where predictions are generated. Prediction environments allow you to establish access controls and approval workflows. |
| Configure prediction timestamp | Determines how prediction rows are time-stamped for Data Drift and Accuracy monitoring: either the time you submitted the prediction request, or the value of a date/time feature provided with the prediction data (e.g., forecast date). Forecast date time-stamping is set automatically for time series deployments; it establishes a common time axis shared by the training data and the data drift and accuracy statistics. This setting doesn't apply to the Service Health prediction timestamp, which always uses the time the prediction server received the request (see Time of Prediction below), and it cannot be changed after the deployment is created and predictions are made. |
| Set deployment importance | Determines the importance level of a deployment. These levels (Critical, High, Moderate, and Low) determine how a deployment is handled during the approval process. Importance represents an aggregate of factors relevant to your organization, such as the deployment's prediction volume, level of exposure, and potential financial impact. When a deployment is assigned an importance of Moderate or above, the Reviewers notification appears (under Model Information) to inform you that DataRobot automatically notifies users assigned as reviewers whenever the deployment requires review. |
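The two time-stamping strategies can be illustrated in plain Python. This is a hedged sketch, not DataRobot code; `forecast_date` is a hypothetical feature name:

```python
from datetime import datetime, timezone

# Hypothetical sketch of the two time-stamping strategies. "forecast_date"
# is an example column name, not a fixed DataRobot identifier.
def prediction_timestamp(request_time, row, datetime_feature=None):
    """Use the configured date/time feature's value when given; otherwise
    fall back to the time the prediction request was submitted."""
    if datetime_feature is not None:
        return datetime.fromisoformat(row[datetime_feature])
    return request_time

row = {"forecast_date": "2023-04-01T00:00:00+00:00", "sales": 1200}
request_time = datetime(2023, 4, 10, tzinfo=timezone.utc)

print(prediction_timestamp(request_time, row))                   # request time
print(prediction_timestamp(request_time, row, "forecast_date"))  # feature value
```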

Time of Prediction

The Time of Prediction value differs between the Data Drift and Accuracy tabs and the Service Health tab:

  • On the Service Health tab, the "time of prediction request" is always the time the prediction server received the prediction request. This method of prediction request tracking accurately represents the prediction service's health for diagnostic purposes.

  • On the Data Drift and Accuracy tabs, the "time of prediction request" is, by default, the time you submitted the prediction request, which you can override with the prediction timestamp in the Prediction History settings.

Advanced options

If you click Show advanced options, you can configure the following deployment settings:

Data Drift

When you deploy a model, there is a chance that the dataset used for training and validation differs from the prediction data. To enable drift tracking, configure the following settings:

| Setting | Description |
| --- | --- |
| Enable feature drift tracking | Configures DataRobot to track feature drift in a deployment. Training data is required for feature drift tracking. |
| Enable target monitoring | Configures DataRobot to track target drift in a deployment. Actuals are required for target monitoring, and target monitoring is required for accuracy monitoring. |
| Training data | Required to enable feature drift tracking in a deployment. |
How does DataRobot track drift?

For data drift, DataRobot tracks:

  • Target drift: DataRobot stores statistics about predictions to monitor how the distribution and values of the target change over time. As a baseline for comparing target distributions, DataRobot uses the distribution of predictions on the holdout.

  • Feature drift: DataRobot stores statistics about predictions to monitor how distributions and values of features change over time. As a baseline for comparing distributions of features:

    • For training datasets larger than 500 MB, DataRobot uses the distribution of a random sample of the training data.

    • For training datasets smaller than 500 MB, DataRobot uses the distribution of 100% of the training data.

DataRobot monitors both target and feature drift information by default and displays results in the Data Drift dashboard. Use the Enable target monitoring and Enable feature drift tracking toggles to turn off tracking if, for example, you have sensitive data that should not be monitored in the deployment.
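As a rough sketch of how drift between a training baseline and prediction data might be quantified (DataRobot's exact drift metrics and binning are not described here), consider the Population Stability Index for a single categorical feature:

```python
import math
from collections import Counter

# A minimal sketch of drift measurement for one categorical feature, using the
# Population Stability Index (PSI). This only illustrates the idea of comparing
# a training-data baseline against prediction data; it is not DataRobot's metric.
def psi(baseline, scoring, eps=1e-6):
    """Compare two categorical value distributions; larger PSI = more drift."""
    categories = set(baseline) | set(scoring)
    base_counts, score_counts = Counter(baseline), Counter(scoring)
    total = 0.0
    for cat in categories:
        p = base_counts[cat] / len(baseline) + eps   # training baseline share
        q = score_counts[cat] / len(scoring) + eps   # prediction data share
        total += (q - p) * math.log(q / p)
    return total

training = ["A"] * 80 + ["B"] * 20      # baseline from training data
predictions = ["A"] * 50 + ["B"] * 50   # feature values seen at prediction time
print(round(psi(training, predictions), 3))
```

Identical distributions score 0; the larger the shift between the baseline and the prediction data, the larger the PSI.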

You can customize how data drift is monitored. See the data drift page for more information on customizing data drift status for deployments.


Data drift tracking is only available for deployments using deployment-aware prediction API routes (i.e., <deploymentId>/predictions).


Accuracy

| Setting | Description |
| --- | --- |
| Association ID | The column name that contains the association ID in the prediction dataset for your model. Association IDs are required for setting up accuracy tracking in a deployment. The association ID functions as an identifier for your prediction dataset so you can later match up outcome data (also called "actuals") with those predictions. Note that the Deploy model button is inactive until you enter an association ID or turn off this toggle. |
| Require association ID in prediction requests | Requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. When enabled, you receive an error if the column is missing. |
| Enable automatic actuals feedback for time series models | For time series deployments with an association ID, enables the automatic submission of actuals so that you don't need to submit them manually via the UI or API. Once enabled, actuals are extracted from the data used to generate predictions: because the prediction rows you send for forecasting include historical data, that history serves as the actual values for previous prediction requests. |
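A minimal sketch of what the association ID makes possible: matching later-arriving actuals back to stored predictions so an accuracy metric can be computed. The IDs and values below are hypothetical:

```python
# Sketch of how an association ID lets outcome data ("actuals") be matched
# back to earlier predictions; the IDs and values are hypothetical examples.
predictions = {"order-1001": 250.0, "order-1002": 180.0}  # id -> predicted value
actuals = {"order-1001": 240.0, "order-1002": 200.0}      # id -> observed outcome

# Join on the association ID, then compute mean absolute error.
matched = [(predictions[k], actuals[k]) for k in predictions if k in actuals]
mae = sum(abs(p - a) for p, a in matched) / len(matched)
print(mae)  # 15.0
```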

Challenger Analysis

DataRobot can securely store prediction request data at the row level for deployments (not supported for external model deployments). This setting must be enabled for any deployment using the Challengers tab. In addition to enabling challenger analysis, access to stored prediction request rows enables you to thoroughly audit the predictions and use that data to troubleshoot operational issues. For instance, you can examine the data to understand an anomalous prediction result or why a dataset was malformed.


Contact your DataRobot representative to learn more about data security, privacy, and retention measures or to discuss prediction auditing needs.

| Setting | Description |
| --- | --- |
| Enable prediction rows storage for challenger analysis | Enables the use of challenger models, which allow you to compare models post-deployment and replace the champion model if necessary. Once enabled, prediction requests made for the deployment are collected by DataRobot. Prediction explanations are not stored. |


Prediction requests are only collected if the prediction data is in a valid format interpretable by DataRobot, such as CSV or JSON. Failed prediction requests are also collected as long as the data format is valid (for example, requests that fail because of missing input features).
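Conceptually, stored prediction rows are what make challenger comparison possible: both models can be evaluated against the same requests. A toy sketch, where the models and data are hypothetical rather than DataRobot's implementation:

```python
# Toy comparison of a champion and a challenger model replayed against the
# same stored prediction rows (models and data are hypothetical examples).
stored_rows = [
    {"x": 1.0, "actual": 3.1},
    {"x": 2.0, "actual": 5.0},
    {"x": 3.0, "actual": 6.8},
]

def champion(row):      # currently deployed model
    return 2.0 * row["x"] + 1.0

def challenger(row):    # candidate replacement
    return 1.9 * row["x"] + 1.2

def mae(model, rows):
    """Mean absolute error of a model over the stored rows."""
    return sum(abs(model(r) - r["actual"]) for r in rows) / len(rows)

print(mae(champion, stored_rows), mae(challenger, stored_rows))
```

If the challenger consistently scores better on the replayed rows, it becomes a candidate to replace the champion.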

Segmented Analysis

Segmented analysis identifies operational issues with training and prediction data requests for a deployment. DataRobot enables drill-down analysis of data drift and accuracy statistics by filtering them by segment attribute and value.

| Setting | Description |
| --- | --- |
| Track attributes for segmented analysis of training data and predictions | Enables DataRobot to monitor deployment predictions by segments, for example, by categorical features. This setting requires training data and is required to enable Fairness monitoring. |
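The idea behind segmented analysis can be sketched by computing the same accuracy metric per segment. The column names here are hypothetical:

```python
from collections import defaultdict

# Sketch of segmented analysis: the same accuracy metric broken down by a
# categorical attribute ("region" is a hypothetical column name).
rows = [
    {"region": "east", "predicted": 10.0, "actual": 12.0},
    {"region": "east", "predicted": 11.0, "actual": 11.5},
    {"region": "west", "predicted": 9.0, "actual": 15.0},
]

errors_by_segment = defaultdict(list)
for r in rows:
    errors_by_segment[r["region"]].append(abs(r["predicted"] - r["actual"]))

segment_mae = {s: sum(e) / len(e) for s, e in errors_by_segment.items()}
print(segment_mae)  # a much larger error in one segment flags a localized issue
```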


Fairness

The Fairness section allows you to define Bias and Fairness settings for your deployment to identify any biases in the model's predictive behavior. If fairness settings were defined prior to deploying the model, the fields are automatically populated. For additional information, see the section on defining fairness tests.

| Setting | Description |
| --- | --- |
| Protected features | The dataset columns to measure fairness of model predictions against; must be categorical. |
| Primary fairness metric | The statistical measure of parity constraints used to assess fairness. |
| Favorable target outcome | The outcome value perceived as favorable for the protected class relative to the target. |
| Fairness threshold | Measures whether a model performs within appropriate fairness bounds for each protected class. |
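As an illustration of one common parity check (proportional parity), the sketch below compares each protected class's favorable-outcome rate to the most favored class and applies a fairness threshold. All names and the 0.8 threshold are hypothetical; DataRobot's implementation may differ:

```python
# Sketch of proportional parity: each protected class's favorable-outcome rate
# is divided by the most favored class's rate, and classes whose score falls
# below the fairness threshold are flagged. Names and 0.8 are hypothetical.
def fairness_scores(rows, protected_feature, favorable_outcome):
    counts = {}
    for row in rows:
        hits_total = counts.setdefault(row[protected_feature], [0, 0])
        hits_total[0] += row["prediction"] == favorable_outcome
        hits_total[1] += 1
    rates = {cls: hits / total for cls, (hits, total) in counts.items()}
    best = max(rates.values())
    return {cls: rate / best for cls, rate in rates.items()}

rows = (
    [{"gender": "F", "prediction": "approved"}] * 30
    + [{"gender": "F", "prediction": "denied"}] * 70
    + [{"gender": "M", "prediction": "approved"}] * 60
    + [{"gender": "M", "prediction": "denied"}] * 40
)
scores = fairness_scores(rows, "gender", "approved")
print({cls: score >= 0.8 for cls, score in scores.items()})
# {'F': False, 'M': True}
```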

Deploy the model

After you add the available data and your model is fully defined, click Deploy model at the top of the screen.


If the Deploy model button is inactive, be sure to either specify an association ID (required for enabling accuracy monitoring) or toggle off Require association ID in prediction requests.

The Creating deployment message appears, indicating that DataRobot is creating the deployment. After the deployment is created, the Overview tab opens.

Click the arrow to the left of the deployment name to return to the deployment inventory.

Updated April 10, 2023