# Set up fairness monitoring

> Set up fairness monitoring - Configure fairness monitoring on a deployment's Fairness Settings tab.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.554159+00:00` (UTC).

## Primary page

- [Set up fairness monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/fairness-settings.html): Full documentation for this topic (HTML).

## Sections on this page

- [Select a fairness metric](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/fairness-settings.html#select-a-fairness-metric): In-page section heading.
- [Define fairness monitoring notifications](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/fairness-settings.html#define-fairness-monitoring-notifications): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/classic-ui/mlops/index.html): Linked from this page.
- [Deployment settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/index.html): Linked from this page.
- [Bias and Fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html): Linked from this page.
- [target monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html): Linked from this page.
- [Bias and Fairness reference page](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/bias-ref.html): Linked from this page.
- [Track attributes for segmented analysis of training data and predictions](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-segment.html): Linked from this page.
- [Protected features](https://docs.datarobot.com/en/docs/reference/glossary/index.html#protected-feature): Linked from this page.
- [association ID](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html#select-an-association-id): Linked from this page.
- [Fairness](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/mlops-fairness.html): Linked from this page.
- [configure the conditions under which notifications are sent to them](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/deploy-notifications.html): Linked from this page.

## Documentation content

# Set up fairness monitoring

On a deployment's Fairness > Settings tab, you can define [Bias and Fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html) settings for your deployment to identify any biases in a binary classification model's predictive behavior. If fairness settings are defined prior to deploying a model, the fields are automatically populated. For additional information, see the section on [defining fairness tests](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation).

> [!NOTE] Note
> To configure fairness settings, you must enable target monitoring for the deployment. Target monitoring allows DataRobot to monitor how the values and distributions of the target change over time by storing prediction statistics. If target monitoring is turned off, a message displays on the Fairness tab to remind you to enable [target monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/data-drift-settings.html).

Configuring fairness criteria and notifications can help you identify the root cause of bias in production models. On the Fairness tab for individual models, DataRobot calculates per-class bias and fairness over time for each protected feature, allowing you to understand why a deployed model failed the predefined acceptable bias criteria. For information on fairness metrics and terminology, see the [Bias and Fairness reference page](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/bias-ref.html).

To measure the fairness of production models, you must configure bias and fairness testing in the Fairness > Settings tab of a deployed model. If bias and fairness testing was configured for the model prior to deployment, the fields are automatically populated.

On a deployment's Fairness Settings page, you can configure the following settings:

| Field | Description |
| --- | --- |
| Segmented Analysis |  |
| Track attributes for segmented analysis of training data and predictions | Enables DataRobot to monitor deployment predictions by segments, for example by categorical features. |
| Fairness |  |
| Protected features | Selects each protected feature's dataset column to measure fairness of model predictions against; these features must be categorical. |
| Primary fairness metric | Selects the statistical measure of parity constraints used to assess fairness. |
| Favorable target outcome | Selects the outcome value perceived as favorable for the protected class relative to the target. |
| Fairness threshold | Selects the fairness threshold to measure if a model performs within appropriate fairness bounds for each protected class. |
| Association ID |  |
| Association ID | Defines the name of the column that contains the association ID in the prediction dataset for your model. An association ID is required to calculate two of the Primary fairness metric options: True Favorable Rate & True Unfavorable Rate Parity and Favorable Predictive & Unfavorable Predictive Value Parity. The association ID functions as an identifier for your prediction dataset so you can later match up outcome data (also called "actuals") with those predictions. |
| Require association ID in prediction requests | Requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. When enabled, you will get an error if the column is missing. This cannot be enabled with Enable automatic association ID generation for prediction rows. |
| Enable automatic association ID generation for prediction rows | With an association ID column name defined, allows DataRobot to automatically populate the association ID values. This cannot be enabled with Require association ID in prediction requests. |
| Definition |  |
| Set definition | Configures the number of protected classes below the fairness threshold required to trigger monitoring notifications. |
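To see why an association ID matters for the rate-parity metrics, consider how outcome data is later joined back to stored predictions. The sketch below is a plain-Python illustration of that matching step, not DataRobot code; the column name `transaction_id` and all values are hypothetical.

```python
# Hypothetical stored predictions, keyed by an association ID column
# (here named "transaction_id"); all names and values are illustrative.
predictions = {
    "txn-001": {"prediction": "approved", "protected_class": "A"},
    "txn-002": {"prediction": "denied", "protected_class": "B"},
}

# Outcome data ("actuals") arriving later, carrying the same ID.
actuals = [
    {"transaction_id": "txn-001", "actual": "approved"},
    {"transaction_id": "txn-002", "actual": "approved"},
]

# Join actuals to predictions on the association ID so that per-class
# true-rate and predictive-value metrics can be computed from matched pairs.
matched = [
    {**predictions[a["transaction_id"]], "actual": a["actual"]}
    for a in actuals
    if a["transaction_id"] in predictions
]
```

Without a shared identifier in both datasets, predictions and actuals cannot be paired, which is why the two parity metrics that depend on actuals require an association ID.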

## Select a fairness metric

DataRobot supports the following fairness metrics in MLOps:

- Equal Parity
- Proportional Parity
- Prediction Balance
- True Favorable and True Unfavorable Rate Parity (True Positive Rate Parity and True Negative Rate Parity)
- Favorable Predictive and Unfavorable Predictive Value Parity (Positive Predictive Value Parity and Negative Predictive Value Parity)

If you are unsure of the appropriate fairness metric for your deployment, click [help me choose](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#select-a-metric).
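As an illustration of how a parity-based metric works (a sketch, not DataRobot's internal implementation), the code below computes a Proportional Parity-style score for a hypothetical protected feature: each class's favorable-prediction rate is divided by the rate of the most favored class, and classes whose score falls below the fairness threshold (0.8 here, an assumed example value) are flagged.

```python
from collections import defaultdict

def proportional_parity(rows, threshold=0.8):
    """Score each protected class by its favorable-prediction rate
    relative to the most favored class, and flag classes whose score
    falls below `threshold`.

    rows: iterable of (protected_class, predicted_favorable: bool)
    Returns {class: (parity_score, passes_threshold)}.
    """
    favorable = defaultdict(int)
    total = defaultdict(int)
    for cls, is_favorable in rows:
        total[cls] += 1
        favorable[cls] += bool(is_favorable)

    # Favorable-prediction rate per protected class.
    rates = {cls: favorable[cls] / total[cls] for cls in total}
    best = max(rates.values())

    # Parity score = rate relative to the most favored class.
    return {
        cls: (rate / best, rate / best >= threshold)
        for cls, rate in rates.items()
    }

# Hypothetical predictions for a protected feature with two classes:
# class A receives the favorable outcome 60% of the time, class B 30%.
rows = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)
result = proportional_parity(rows)
```

In this example, class B's score is 0.5 (30% vs. 60%), so it falls below the 0.8 threshold and would count as a protected class below the fairness threshold.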

> [!NOTE] Note
> To calculate True Favorable Rate & True Unfavorable Rate Parity and Favorable Predictive & Unfavorable Predictive Value Parity, the deployment must provide an [association ID](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html#select-an-association-id).

## Define fairness monitoring notifications

Configure notifications to alert you when a production model is at risk of or fails to meet predefined fairness criteria. You can visualize fairness status on the [Fairness](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/mlops-fairness.html) tab. Fairness monitoring uses a primary fairness metric and two thresholds—protected features considered to be "At Risk" and "Failing"—to monitor fairness. If not specified, DataRobot uses the default thresholds.

> [!NOTE] Note
> To access the settings in the Definition & Notifications section, configure and save the fairness settings. Only deployment Owners can modify fairness monitoring settings; however, Users can [configure the conditions under which notifications are sent to them](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/deploy-notifications.html). Consumers cannot modify monitoring or notification settings.

To customize the rules used to calculate the fairness status for each deployment:

1. On the Fairness Settings page, in the Definition section, click Set definition and configure the threshold settings for monitoring fairness:

    | Threshold | Description |
    | --- | --- |
    | At Risk | Defines the number of protected features below the bias threshold that, when exceeded, classifies the deployment as "At Risk" and triggers notifications. The threshold for At Risk should be lower than the threshold for Failing. Default value: 1 |
    | Failing | Defines the number of protected features below the bias threshold that, when exceeded, classifies the deployment as "Failing" and triggers notifications. The threshold for Failing should be higher than the threshold for At Risk. Default value: 2 |

    > [!NOTE] Note
    > Changes to thresholds affect the periods in which predictions are made across the entire history of a deployment. These updated thresholds are reflected in the performance monitoring visualizations on the Fairness tab.

2. After updating the fairness monitoring settings, click Save.
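The status rules above can be sketched as a small decision function (an illustration of the documented behavior, not DataRobot code): count the protected features currently below the fairness threshold and compare that count against the At Risk and Failing definitions, using the documented defaults of 1 and 2.

```python
def fairness_status(features_below_threshold, at_risk=1, failing=2):
    """Map the number of protected features below the fairness threshold
    to a deployment status, following the Set definition rules: the
    deployment is "Failing" once the count exceeds the Failing threshold,
    "At Risk" once it exceeds the At Risk threshold, and passing
    otherwise. Defaults mirror the documented default values.
    """
    if features_below_threshold > failing:
        return "Failing"
    if features_below_threshold > at_risk:
        return "At Risk"
    return "Passing"
```

With the defaults, a deployment with two protected features below the threshold is "At Risk", and one with three is "Failing"; a "Passing" label here is an assumed name for the non-alerting state.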
