Bias and Fairness

Bias and Fairness testing provides methods to calculate fairness for a binary classification model and to identify any biases in the model's predictive behavior. In DataRobot, bias represents the difference between a model's predictions for different populations (or groups), while fairness is the measure of that bias.

Select protected features in the dataset and choose an appropriate fairness metric, either before model building or from the Leaderboard. Once models are built, Bias and Fairness insights help identify bias in a model and visualize the root-cause analysis, explaining why, and from where, the model is learning bias in the training data.
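As a rough illustration of these definitions, the minimal pandas sketch below (with hypothetical column names and data) computes each group's rate of favorable predictions; differences in this rate across groups are the bias described above, and a fairness metric summarizes those differences as a score.

```python
import pandas as pd

# Hypothetical scoring output: one row per record, with a protected feature
# ("gender") and the model's predicted class for that record.
preds = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "predicted": [">50K", "<=50K", "<=50K", "<=50K", ">50K", ">50K", ">50K", "<=50K"],
})

# Bias shows up as a difference in behavior between groups: here, the share
# of each group that receives the favorable prediction (">50K").
favorable_rate = (preds["predicted"] == ">50K").groupby(preds["gender"]).mean()
print(favorable_rate)   # for this toy data: F -> 0.25, M -> 0.75
```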

Define fairness tests

Once you select a target, click Show advanced options and select the Bias and Fairness tab.

To configure Bias and Fairness, set the values that define your use case. For additional detail, refer to the bias and fairness reference for common terms and metric definitions.

  1. Identify up to 10 Protected Features in the dataset. Protected features must be categorical. The model's fairness is calculated against the protected features selected from the dataset.

  2. Define the Favorable Target Outcome, i.e., the outcome perceived as favorable for the protected class relative to the target. In the example below, the target is "salary", so annual salaries are listed under Favorable Target Outcome, and a favorable outcome is earning greater than 50K.

  3. Choose the Primary Fairness Metric most appropriate for your use case from the five options below. If you are unsure, use the Help me choose option.

    Proportional Parity: For each protected class, what is the probability of receiving favorable predictions from the model? This metric (also known as "Statistical Parity" or "Demographic Parity") is based on equal representation of the model's target across protected classes.

    Equal Parity: For each protected class, what is the total number of records with favorable predictions from the model? This metric is based on equal representation of the model's target across protected classes.

    Prediction Balance (Favorable Class Balance and Unfavorable Class Balance): For all actuals that were favorable/unfavorable outcomes, what is the average predicted probability for each protected class? This metric is based on equal representation of the model's average raw scores across each protected class and is part of the set of Prediction Balance fairness metrics.

    True Favorable Rate Parity and True Unfavorable Rate Parity: For each protected class, what is the probability of the model predicting the favorable/unfavorable outcome for all actuals of the favorable/unfavorable outcome? This metric is based on equal error.

    Favorable Predictive Value Parity and Unfavorable Predictive Value Parity: For all records the model predicts as favorable/unfavorable, what is the probability that the model is correct (i.e., that the actual outcome is favorable/unfavorable)? This metric (also known as "Positive Predictive Value Parity") is based on equal error.

    The selected fairness metric serves as the foundation for the calculated fairness score, a numerical computation of the model's fairness for each protected class.

  4. Set a Fairness Threshold for the project. The threshold serves as a benchmark for the model's fairness score; that is, it indicates whether the model performs within appropriate fairness bounds for each protected class. It does not affect the fairness score or the performance of any protected class. (See the reference section for more information, and the sketch following this list for a rough illustration of how scores and the threshold interact.)
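To make steps 3 and 4 concrete, here is a minimal sketch of how a proportional-parity style fairness score and a threshold can interact. It is an illustration only, not DataRobot's exact computation: it assumes each class's score is its favorable-prediction rate scaled relative to the most favored class, and it simply flags classes whose score falls below the threshold.

```python
import pandas as pd

def proportional_parity_scores(df, protected, predicted, favorable, threshold=0.8):
    """Illustrative per-class fairness scores for proportional parity: each
    class's favorable-prediction rate, scaled so the most favored class
    scores 1.0. Classes scoring below `threshold` are flagged."""
    rate = (df[predicted] == favorable).groupby(df[protected]).mean()
    score = rate / rate.max()
    return pd.DataFrame({"fairness_score": score, "below_threshold": score < threshold})

# Hypothetical predictions for a salary model with protected feature "gender".
preds = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "predicted": [">50K", "<=50K", "<=50K", "<=50K", ">50K", ">50K", ">50K", "<=50K"],
})
print(proportional_parity_scores(preds, "gender", "predicted", ">50K"))
#         fairness_score  below_threshold
# F             0.333333             True
# M             1.000000            False
```

As in step 4, the threshold here only benchmarks the scores; it does not change them.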

Select a metric

If you are unsure of the best metric for your model, click Help me choose.

DataRobot presents a questionnaire where each question is determined by your answer to the previous one. Once completed, DataRobot recommends a metric based on your answers.

Because bias and fairness are ethically complex, DataRobot's questions cannot capture every detail of each use case. Use the recommended metric as a guidepost; it is not necessarily the correct (or only) metric for your use case. Try answering the questions differently to see how that changes the recommendation.

Click Select to add the highlighted option to the Primary Fairness Metric field.

Define tests from the Leaderboard

If you did not configure Bias and Fairness prior to model building, you can configure fairness tests from the Leaderboard.

  1. Select a model and click the Bias and Fairness tab.

  2. Follow the instructions on configuring bias and fairness in advanced options.

  3. Click Save. DataRobot then configures fairness testing for all models in your project based on these settings.
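The same settings can also be supplied programmatically. The sketch below uses the DataRobot Python client's AdvancedOptions; the parameter names, metric identifier, and value types are assumptions based on recent client versions, so verify them against the client documentation for your release.

```python
import datarobot as dr

# Assumes an already-configured client connection (dr.Client()).
# Parameter names below are assumptions from recent DataRobot Python client
# releases; check your client's documentation before relying on them.
advanced_options = dr.AdvancedOptions(
    protected_features=["gender", "race"],      # up to 10 categorical features
    preferable_target_value=">50K",             # the favorable target outcome
    fairness_metrics_set="proportionalParity",  # primary fairness metric (identifier assumed)
    fairness_threshold="0.8",                   # benchmark for per-class fairness scores
)

project = dr.Project.create(sourcedata="salary.csv", project_name="Salary fairness")
project.set_target(target="salary", advanced_options=advanced_options)
```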
