# Set up data drift monitoring

> Set up data drift monitoring - Configure data drift monitoring on a deployment's Data Drift Settings
> tab.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.553777+00:00` (UTC).

## Primary page

- [Set up data drift monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html): Full documentation for this topic (HTML).

## Sections on this page

- [Define data drift monitoring notifications](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html#define-data-drift-monitoring-notifications): In-page section heading.
- [Example of an excluded feature](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html#example-of-an-excluded-feature): In-page section heading.
- [Example of configuring the importance and drift thresholds](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html#example-of-configuring-the-importance-and-drift-thresholds): In-page section heading.
- [Example of starring a feature to assign high importance](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html#example-of-starring-a-feature-to-assign-high-importance): In-page section heading.
- [Example of setting a drift status rule](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html#example-of-setting-a-drift-status-rule): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/classic-ui/mlops/index.html): Linked from this page.
- [Deployment settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/index.html): Linked from this page.
- [Data Drift](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html): Linked from this page.
- [accuracy monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html): Linked from this page.
- [configure the conditions under which notifications are sent to them](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/deploy-notifications.html): Linked from this page.
- [DataRobot API](https://docs.datarobot.com/en/docs/api/reference/public-api/observability_drift.html#enumerated-values_12): Linked from this page.
- [retrieve a list of supported metrics](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/deployment.html#data-drift): Linked from this page.

## Documentation content

# Set up data drift monitoring

When deploying a model, there is a chance that the dataset used for training and validation differs from the prediction data. You can enable data drift monitoring on the Data Drift > Settings tab. DataRobot monitors both target and feature drift information and displays results on the [Data Drift](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html) tab.

> [!NOTE] How does DataRobot track drift?
> DataRobot tracks two types of drift:
> 
> Target drift
> : DataRobot stores statistics about predictions to monitor how the distribution and values of the target change over time. As a baseline for comparing target distributions, DataRobot uses the distribution of predictions on the holdout.
> Feature drift
> : DataRobot stores statistics about predictions to monitor how the distributions and values of features change over time. The supported feature data types are numeric, categorical, and text. As a baseline for comparing feature distributions:
> 
> - For training datasets larger than 500MB, DataRobot uses the distribution of a random sample of the training data.
> - For training datasets smaller than 500MB, DataRobot uses the distribution of 100% of the training data.

> [!NOTE] Availability information
> Data drift tracking is only available for deployments using deployment-aware prediction API routes (i.e., `https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions`).
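Any HTTP client can call the deployment-aware route. The sketch below, using only the Python standard library, shows one way to build and send such a request; the base URL, deployment ID, token, and the `DataRobot-Key` header value are hypothetical placeholders (and the key header is not required on every DataRobot installation), so treat this as an assumption-laden example rather than a canonical client.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own endpoint and deployment ID.
BASE_URL = "https://example.datarobot.com"
DEPLOYMENT_ID = "<deploymentId>"


def prediction_url(base_url: str, deployment_id: str) -> str:
    """Build the deployment-aware prediction route that drift tracking requires."""
    return f"{base_url}/predApi/v1.0/deployments/{deployment_id}/predictions"


def predict(rows: list, api_token: str, datarobot_key: str) -> dict:
    """POST prediction rows (a list of feature dicts) to the deployment route.

    Predictions made through this route are recorded by the deployment,
    which is what makes data drift monitoring possible.
    """
    req = urllib.request.Request(
        prediction_url(BASE_URL, DEPLOYMENT_ID),
        data=json.dumps(rows).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
            "DataRobot-Key": datarobot_key,  # assumed header; not needed everywhere
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Requests sent to generic (non-deployment) prediction routes bypass this recording, which is why drift tracking is unavailable for them.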

On a deployment's Data Drift Settings page, you can configure the following settings:

| Field | Description |
| --- | --- |
| Data Drift |  |
| Enable feature drift tracking | Configures DataRobot to track feature drift in a deployment. Training data is required for feature drift tracking. |
| Enable target monitoring | Configures DataRobot to track target drift in a deployment. Target monitoring is required for accuracy monitoring. |
| Training data |  |
| Training data | Displays the dataset used as a training baseline while building a model. |
| Inference data |  |
| DataRobot is storing your predictions | Confirms DataRobot is recording and storing the results of any predictions made by this deployment. DataRobot stores a deployment's inference data when a deployment is created. It cannot be uploaded separately. |
| Inference data (external model) |  |
| DataRobot is recording the results of any predictions made against this deployment | Confirms DataRobot is recording and storing the results of any predictions made by the external model. |
| Drop file(s) here or choose file | Uploads a file with prediction history data to monitor data drift. |
| Definition |  |
| Set definition | Configures the drift and importance metric settings and threshold definitions for data drift monitoring. |

> [!NOTE] Note
> DataRobot monitors both target and feature drift information by default and displays results in the [Data Drift dashboard](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html). Use the Enable target monitoring and Enable feature drift tracking toggles to turn off tracking if, for example, you have sensitive data that should not be monitored in the deployment. The Enable target monitoring setting is also required to enable [accuracy monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html).

## Define data drift monitoring notifications

Drift assesses how the distribution of data changes across all features for a specified range. The thresholds you set determine the amount of drift you will allow before a notification is triggered.

> [!NOTE] Note
> Only deployment Owners can modify data drift monitoring settings; however, Users can [configure the conditions under which notifications are sent to them](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/deploy-notifications.html). Consumers cannot modify monitoring or notification settings.

Use the Definition section of the Data Drift > Settings tab to set thresholds for drift and importance:

- Drift is a measure of how new prediction data differs from the original data used to train the model.
- Importance allows you to separate the features you care most about from those that are less important.

For both drift and importance, you can visualize the thresholds and how they separate the features on the [Data Drift tab](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html). By default, a deployment's data drift status is marked as "Failing" (red icon) when at least one high-importance feature exceeds the set drift metric threshold, and as "At Risk" (yellow icon) when no high-importance features exceed the threshold but at least one low-importance feature does.
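The default rule just described can be summarized as a small decision function. This is an illustrative sketch of the logic as documented, not DataRobot code:

```python
def default_drift_status(n_high_drifted: int, n_low_drifted: int) -> str:
    """Default drift status rule.

    Arguments are the counts of high- and low-importance features
    currently exceeding the drift metric threshold.
    """
    if n_high_drifted >= 1:
        return "Failing"   # any drifted high-importance feature
    if n_low_drifted >= 1:
        return "At Risk"   # only low-importance features drifted
    return "Passing"       # nothing exceeds the threshold


print(default_drift_status(1, 5))  # → Failing
print(default_drift_status(0, 2))  # → At Risk
print(default_drift_status(0, 0))  # → Passing
```

Deployment Owners can replace these defaults with custom rules, as described below.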

Deployment Owners can customize the rules used to calculate the drift status for each deployment. As a deployment Owner, you can:

- Define or override the list of high or low-importance features to monitor features that are important to you or put less emphasis on less important features.
- Exclude features expected to drift from drift status calculation and alerting so you do not get false alarms.
- Customize what "At Risk" and "Failing" drift statuses mean to personalize and tailor the drift status of each deployment to your needs.

To set up monitoring of drift status for a deployment:

1. On the Data Drift Settings page, in the Definition section, configure the settings for monitoring data drift:

   | # | Element | Description |
   | --- | --- | --- |
   | 1 | Range | Adjusts the time range of the Reference period, which compares training data to prediction data. Select a time range from the dropdown menu. |
   | 2 | Drift metric | DataRobot only supports the Population Stability Index (PSI) metric. For more information, see the note on Drift metric support below. |
   | 3 | Importance metric | DataRobot only supports the Permutation Importance metric. The importance metric measures the most impactful features in the training data. |
   | 4 | X excluded features | Excludes features (including the target) from drift status calculations. Click X excluded features to open a dialog box where you can enter the names of features to set as Drift exclusions. Excluded features do not affect drift status for the deployment but still display on the Feature Drift vs. Feature Importance chart. See an [example](#example-of-an-excluded-feature). |
   | 5 | X starred features | Sets features to be treated as high importance even if they were initially assigned low importance. Click X starred features to open a dialog box where you can enter the names of features to set as High-importance stars. Once added, these features are assigned high importance; they ignore the importance thresholds but still display on the Feature Drift vs. Feature Importance chart. See an [example](#example-of-starring-a-feature-to-assign-high-importance). |
   | 6 | Drift threshold | Configures the thresholds of the drift metric. When drift thresholds are changed, the Feature Drift vs. Feature Importance chart updates to reflect the changes. |
   | 7 | Importance threshold | Configures the thresholds of the importance metric. The importance metric measures the most impactful features in the training data. When importance thresholds are changed, the Feature Drift vs. Feature Importance chart updates to reflect the changes. See an [example](#example-of-configuring-the-importance-and-drift-thresholds). |
   | 8 | "At Risk" / "Failing" thresholds | Configures the values that trigger the "At Risk" and "Failing" drift statuses. See an [example](#example-of-setting-a-drift-status-rule). |

   > [!NOTE] Note
   > Changes to thresholds affect the periods of time in which predictions are made across the entire history of a deployment. These updated thresholds are reflected in the performance monitoring visualizations on the Data Drift tab.

2. After updating the data drift monitoring settings, click Save.
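Because PSI is the only supported drift metric, it may help to see how it behaves. The following is a generic PSI sketch over pre-binned distributions; DataRobot's actual binning and implementation are internal, so this only illustrates the metric's shape, not the product's computation:

```python
import math


def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    Each list holds per-bin proportions that sum to 1; `expected` is the
    training baseline and `actual` is the recent prediction data.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total


# Identical distributions yield a PSI of 0 (no drift).
print(psi([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25]))  # → 0.0
# A shifted distribution yields a larger PSI (more drift).
print(round(psi([0.25, 0.25, 0.25, 0.25], [0.10, 0.20, 0.30, 0.40]), 4))  # → 0.2282
```

The drift threshold you configure in the Definition section is compared against per-feature values of this kind: higher PSI means the feature's recent distribution has moved further from its training baseline.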

### Example of an excluded feature

In the example below, the excluded feature, which appears as a gray circle, would normally change the drift status to "Failing" (red icon). Because it is excluded, the status remains "Passing."

### Example of configuring the importance and drift thresholds

In the example below, the importance and drift thresholds have been adjusted (indicated by the arrows), resulting in more features marked "At Risk" and "Failing" than in the chart above.

### Example of starring a feature to assign high importance

In the example below, the starred feature, which appears as a white circle, would normally leave the drift status at "At Risk" due to its initially low importance. However, because it is assigned high importance, the feature changes the drift status to "Failing" (red icon).

### Example of setting a drift status rule

The following example configures the rule for a deployment to mark its drift status as "At Risk" if either of the following is true:

- The number of low-importance features above the drift threshold is greater than 1.
- The number of high-importance features above the drift threshold is greater than 3.
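As an illustration only (DataRobot evaluates the rule internally, not in user code), the example rule above is equivalent to:

```python
def at_risk(n_low_drifted: int, n_high_drifted: int) -> bool:
    """Example 'At Risk' rule: more than 1 low-importance feature, or more
    than 3 high-importance features, exceed the drift threshold."""
    return n_low_drifted > 1 or n_high_drifted > 3


print(at_risk(2, 0))  # → True: two low-importance features drifted
print(at_risk(1, 3))  # → False: neither count exceeds its limit
print(at_risk(0, 4))  # → True: four high-importance features drifted
```

Raising these counts above the defaults makes the deployment more tolerant of drift before its status changes; lowering them makes alerting more sensitive.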
