
Fairness monitoring

Availability information

The Fairness tab is only available for DataRobot MLOps users. Contact your DataRobot representative for more information about enabling this feature.

Fairness monitoring lets you configure tests that recognize, in real time, when protected features in the dataset fail to meet predefined fairness conditions.

Enable fairness monitoring on the Deployment > Settings > Data page.

Note

If target monitoring is turned off, a message displays on the Fairness tab to remind you to enable target monitoring.

Configuring fairness criteria and notifications helps identify the root cause of bias in production models. On the Fairness tab for individual models, DataRobot calculates per-class bias and fairness over time for each protected feature, allowing you to understand why a deployed model failed the predefined acceptable bias criteria.

For information on fairness metrics and terminology, see the Bias and Fairness reference page.

Define fairness criteria

To measure the fairness of production models, you must configure bias and fairness testing in the Settings > Data tab of a deployed model. If bias and fairness testing was configured for the model prior to deployment, the fields are automatically populated.
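If you prefer to script this configuration, the DataRobot Python client can update a deployment's fairness settings. The sketch below is illustrative only: the endpoint, token, deployment ID, protected feature, metric-set value, and threshold are placeholder assumptions, and the exact method signature may vary by client version, so verify against the client reference.

```python
# A minimal sketch using the DataRobot Python client. All IDs and values
# below are placeholder assumptions -- check the client docs for your version.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Mirrors the Settings > Data fields: protected features, the favorable
# target value, the fairness metric set, and the acceptable threshold.
deployment.update_fairness_settings(
    protected_features=["gender"],
    preferable_target_value="True",
    fairness_metric_set="proportionalParity",
    fairness_threshold=0.8,
)
```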

DataRobot supports multiple fairness metrics in MLOps. If you are unsure which fairness metric is appropriate for your deployment, click Help me choose.

Note

To calculate True Favorable Rate & True Unfavorable Rate Parity and Favorable Predictive & Unfavorable Predictive Value Parity, the deployment must provide an association ID.
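These metric sets compare predictions with actual outcomes, and the association ID is what links each actual value back to its prediction. As a rough sketch (the IDs and values below are hypothetical), actuals can be reported through the Python client:

```python
# Sketch: report actual outcomes so that rate- and predictive-value-parity
# metrics can be computed. The association IDs must match the IDs submitted
# with the corresponding prediction rows (values here are hypothetical).
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

deployment.submit_actuals([
    {"association_id": "loan-00017", "actual_value": "True"},
    {"association_id": "loan-00018", "actual_value": "False"},
])
```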

After defining the fairness settings, click the Monitoring tab to configure fairness monitoring and notifications.

Monitor fairness

When viewing the Deployment inventory with the Governance lens, the Fairness column provides an at-a-glance indication of how each deployment is performing based on the fairness tests set up in the Settings > Data tab.

To view more detailed information for an individual model or investigate why a model is failing fairness tests, click on a deployment in the inventory list and navigate to the Fairness tab.

Note

To receive email notifications about fairness status, configure notifications, schedule monitoring, and define fairness monitoring settings.

Investigate bias

The Fairness tab helps you understand why a deployment is failing fairness tests and which protected features are below the predefined fairness threshold. It provides two interactive and exportable visualizations that help identify which feature is failing fairness testing and why.

Per-Class Bias chart: Uses the fairness threshold and fairness score of each class to determine whether certain classes are experiencing bias in the model's predictive behavior.

Fairness Over Time chart: Illustrates how the distribution of a protected feature's fairness scores has changed over time.

If a feature is marked as below threshold, the feature does not meet the predefined fairness conditions.

Select the feature on the left to display fairness scores for each segmented attribute and better understand where bias exists within the feature.

To further modify the display, see the documentation for the version selector.

View per-class bias

The Per-Class Bias chart helps you identify whether a model is biased and, if so, how much and toward or against whom. For more information, see the per-class bias documentation.
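To make the calculation concrete, the following is an illustrative sketch (not DataRobot's internal implementation) of a proportional-parity-style per-class fairness score: each class's favorable-prediction rate is divided by the highest class rate, and any class scoring below the threshold is flagged. The data and threshold are made up for the example.

```python
import pandas as pd

# Hypothetical scored data: one row per prediction.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "prediction": ["True", "False", "False", "True", "True", "True", "False", "True"],
})

favorable = "True"
threshold = 0.8

# Favorable-prediction rate per protected class.
rates = (df["prediction"] == favorable).groupby(df["gender"]).mean()

# Proportional-parity-style fairness score: rate relative to the best class.
scores = rates / rates.max()

for cls, score in scores.items():
    status = "OK" if score >= threshold else "below threshold"
    print(f"{cls}: fairness score {score:.2f} ({status})")
```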

Hover over a point on the chart to view its details.

View fairness over time

After configuring fairness criteria and making predictions with fairness monitoring enabled, you can view how the fairness scores of a protected feature or its values have changed over time for a deployment. The X-axis shows the time range over which predictions were made for the deployment, and the Y-axis shows the fairness score.
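Conceptually, the chart repeats the per-class calculation within each time bucket. A rough pandas sketch with made-up data:

```python
import pandas as pd

# Hypothetical prediction log with timestamps.
log = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2022-03-01 09:00", "2022-03-01 14:00", "2022-03-01 16:00",
        "2022-03-02 10:00", "2022-03-02 11:00", "2022-03-02 15:00",
    ]),
    "gender": ["F", "M", "M", "F", "F", "M"],
    "prediction": ["False", "True", "True", "True", "False", "True"],
})

favorable = "True"

def fairness_scores(group: pd.DataFrame) -> pd.Series:
    # Favorable-prediction rate per class, normalized by the best class.
    rates = (group["prediction"] == favorable).groupby(group["gender"]).mean()
    return rates / rates.max()

# One fairness score per class per day -- the points plotted on the chart.
daily = log.groupby(log["timestamp"].dt.date).apply(fairness_scores)
print(daily)
```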

Hover over a point on the chart to view its details.

You can also hide specific features or feature values from the chart by clearing the checkbox next to each name.

The controls work the same as those available on the Data Drift tab.


Updated March 8, 2022