Bias and Fairness¶
The Bias and Fairness tabs identify whether a model is biased and explain why the model learned that bias from the training data. The following sections provide additional information on using the tabs:
Leaderboard tab | Description | Source |
---|---|---|
Cross-Class Accuracy | Measures the model's accuracy for each class of the protected feature. | Validation data |
Cross-Class Data Disparity | Shows why a model is biased and where in the training data it learned that bias. | Validation data |
Per-Class Bias | Identifies whether a model is biased and, if so, by how much and which classes it is biased toward or against. | Validation data |
Settings | Configures fairness tests from the Leaderboard. | N/A |
If you did not configure Bias and Fairness prior to model building, you can configure fairness tests for Leaderboard models in Bias and Fairness > Settings.
See the Bias and Fairness reference for a description of the methods used to calculate fairness for a machine learning model and to identify biases in the model's predictive behavior.
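As an illustration of the kind of per-class measure the reference describes, the following minimal sketch computes the rate of favorable predictions for each class of a protected feature and compares each class against the best-scoring class using a four-fifths-style threshold. The function, column names, and default threshold are assumptions made for this example; they are not the platform's implementation or API.

```python
# Illustrative sketch only: a simple per-class fairness check based on
# favorable-prediction rates, not the platform's implementation.
import pandas as pd

def per_class_favorable_rates(df: pd.DataFrame,
                              protected_feature: str,
                              prediction_col: str,
                              favorable_label=1,
                              fairness_threshold: float = 0.8) -> pd.DataFrame:
    """Return each class's favorable-prediction rate and whether it passes
    a four-fifths-style ratio test against the highest-scoring class."""
    rates = (
        df.groupby(protected_feature)[prediction_col]
          .apply(lambda preds: (preds == favorable_label).mean())
          .rename("favorable_rate")
          .to_frame()
    )
    # Ratio of each class's rate to the best class's rate.
    rates["relative_to_best"] = rates["favorable_rate"] / rates["favorable_rate"].max()
    rates["passes_threshold"] = rates["relative_to_best"] >= fairness_threshold
    return rates

# Hypothetical usage with validation-partition predictions:
# results = per_class_favorable_rates(validation_df, "gender", "predicted_class")
# print(results)
```

Classes whose rate falls below the threshold relative to the best class would be flagged as potentially disadvantaged; this mirrors the general idea behind per-class bias reporting, though the platform's actual metrics are documented in the Bias and Fairness reference.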
Bias and Fairness considerations¶
Consider the following when using the Bias and Fairness tabs:
- Bias and fairness testing is only available for binary classification projects.
- Protected features must be categorical features in the dataset.
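Both requirements can be verified before configuring fairness testing. The sketch below assumes column names and a pandas DataFrame purely for illustration; it is not part of the product.

```python
# Illustrative pre-check: confirm the target is binary and the intended
# protected features are categorical. Column names are assumptions.
import pandas as pd

def can_configure_fairness(df: pd.DataFrame,
                           target: str,
                           protected_features: list[str]) -> bool:
    # Binary classification: exactly two distinct target values.
    is_binary_target = df[target].nunique(dropna=True) == 2
    # Protected features must be categorical (or string/object) columns.
    all_categorical = all(
        isinstance(df[col].dtype, pd.CategoricalDtype) or df[col].dtype == object
        for col in protected_features
    )
    return is_binary_target and all_categorical

# Hypothetical usage:
# ok = can_configure_fairness(training_df, "loan_approved", ["gender", "race"])
```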