Fairness¶
After you configure a deployment's fairness settings, you can use the **Monitoring** > **Fairness** tab to configure tests that monitor predictions in real time and recognize when protected features in the dataset fail to meet the predefined fairness conditions.
Investigate bias¶
The Fairness tab helps you understand why a deployment is failing fairness tests and which protected features are below the predefined fairness threshold. It provides two interactive and exportable visualizations that help identify which feature is failing fairness testing and why.
| Chart | Description |
|---|---|
| 1. Aggregate Fairness / Per-Class Bias | Uses the fairness threshold and the fairness score of each class to determine whether certain classes are experiencing bias in the model's predictive behavior. |
| 2. Fairness Over Time | Illustrates how the distribution of a protected feature's fairness scores has changed over time. |
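To make the threshold logic in the table above concrete, here is a minimal sketch, not the DataRobot API, of how a per-class fairness score can be derived and compared against a fairness threshold. The proportional parity metric, the function names, and the sample rates are illustrative assumptions.

```python
# Illustrative only: per-class fairness scores via proportional parity.
# Each class's favorable-outcome rate is scaled by the best class's rate,
# and classes scoring under the fairness threshold are flagged.

def fairness_scores(favorable_rates):
    """Scale each class's favorable-outcome rate by the best class's rate."""
    best = max(favorable_rates.values())
    return {cls: rate / best for cls, rate in favorable_rates.items()}

def below_threshold(scores, threshold=0.8):
    """Return the classes whose fairness score falls under the threshold."""
    return [cls for cls, score in scores.items() if score < threshold]

# Hypothetical favorable-outcome rates for a protected feature
rates = {"female": 0.30, "male": 0.45, "nonbinary": 0.40}
scores = fairness_scores(rates)        # male: 1.0, nonbinary: ~0.889, female: ~0.667
flagged = below_threshold(scores, 0.8) # ["female"] is Below Threshold
```

A class flagged this way corresponds to a **Below Threshold** feature value in the chart.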
View per-class bias¶
The Aggregate Fairness chart helps identify whether a model is biased and, if so, by how much and toward or against whom. Click a feature to view its per-class bias. For more information, see the documentation on per-class bias. If a feature is identified as Below Threshold, the feature does not meet the predefined fairness conditions. Click a Below Threshold feature on the left to display the per-class fairness scores for each segmented attribute and better understand where bias exists within the feature.
Hover over a point on the chart to view its details:
View fairness over time¶
After configuring fairness criteria and making predictions with fairness monitoring enabled, you can view how fairness scores of the protected feature or feature values have changed over time for a deployment. The X-axis measures the range of time that predictions have been made for the deployment, and the Y-axis measures the fairness score.
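The points on this chart can be thought of as per-bucket fairness scores: predictions are grouped by date on the X-axis, and a score is computed for each class in each bucket. The sketch below illustrates that aggregation under assumed inputs; the tuple shape, function name, and proportional parity metric are hypothetical, not the product's internals.

```python
# Illustrative only: build the data behind a "Fairness Over Time" chart by
# bucketing predictions per date and scoring each class within the bucket.
from collections import defaultdict

def scores_over_time(predictions):
    """predictions: iterable of (date, class, favorable) tuples (assumed shape).
    Returns {date: {class: fairness_score}} using proportional parity."""
    totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # (favorable, total)
    for date, cls, favorable in predictions:
        counts = totals[date][cls]
        counts[0] += int(favorable)
        counts[1] += 1
    series = {}
    for date, per_class in sorted(totals.items()):
        rates = {c: fav / n for c, (fav, n) in per_class.items()}
        best = max(rates.values())
        series[date] = {c: r / best for c, r in rates.items()}
    return series

# Hypothetical prediction records spanning two days
preds = [
    ("2024-01-01", "a", True), ("2024-01-01", "a", False), ("2024-01-01", "b", True),
    ("2024-01-02", "a", True), ("2024-01-02", "b", False), ("2024-01-02", "b", True),
]
series = scores_over_time(preds)
# Day 1: class "a" scores 0.5; Day 2: class "b" scores 0.5
```

Each date's dictionary corresponds to one column of points on the chart.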
Hover over a point on the chart to view its details:
You can also hide specific features or feature values from the chart by unchecking the boxes next to their names:
Considerations¶
- Bias and Fairness monitoring is only available for binary classification models and deployments.
- To upload actuals for predictions, an association ID is required. The association ID is also used to calculate True Positive & Negative Rate Parity and Positive & Negative Predictive Value Parity.
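The considerations above can be illustrated with a short sketch of why the association ID matters: it joins each prediction to its later-uploaded actual, which is what makes rate-based parity metrics such as True Positive Rate Parity computable per protected class. The function name and dictionary shapes here are assumptions for illustration, not DataRobot's implementation.

```python
# Illustrative only: use association IDs to join predictions to actuals,
# then compute the per-class true positive rate (the input to TPR parity).

def true_positive_rates(predictions, actuals):
    """predictions: {assoc_id: (protected_class, predicted_label)} (assumed shape);
    actuals: {assoc_id: actual_label}. Labels are 1 (positive) or 0 (negative)."""
    tp, pos = {}, {}
    for assoc_id, (cls, pred) in predictions.items():
        actual = actuals.get(assoc_id)
        if actual is None:   # actual not yet uploaded for this ID; skip
            continue
        if actual == 1:      # TPR only considers actual positives
            pos[cls] = pos.get(cls, 0) + 1
            if pred == 1:
                tp[cls] = tp.get(cls, 0) + 1
    return {cls: tp.get(cls, 0) / n for cls, n in pos.items()}

# Hypothetical records keyed by association ID
preds = {"id1": ("a", 1), "id2": ("a", 0), "id3": ("b", 1), "id4": ("b", 1)}
actuals = {"id1": 1, "id2": 1, "id3": 1, "id4": 0}
tprs = true_positive_rates(preds, actuals)  # {"a": 0.5, "b": 1.0}
```

Comparing these per-class rates against each other (or against the best class) yields the parity metrics listed above.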