
Bias and Fairness resources

The tools of the Bias and Fairness feature test your models for bias. This allows you to identify bias before (or after) models are deployed, and to take action before the model's decisions cause negative outcomes for your organization. See a more complete overview here.

The workflow for implementing bias and fairness is:

  • Select one or more protected features and pick a fairness metric.
  • Use insights to determine if models are biased with respect to the protected features.
  • Monitor production models for bias.
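To make the "fairness metric" step above concrete, the following is a minimal sketch of one common metric, proportional parity: each class's favorable-prediction rate is divided by the highest rate among classes, and classes whose score falls below a threshold (0.8 here, the common "four-fifths rule") are flagged. The function name, data layout, and default threshold are illustrative assumptions, not DataRobot's implementation.

```python
# Hypothetical proportional-parity check for one protected feature.
# Assumption: predictions are 0/1, where 1 is the favorable outcome.
from collections import defaultdict

def proportional_parity(protected, predictions, threshold=0.8):
    """Return ({class: fairness score}, [classes below threshold])."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for cls, pred in zip(protected, predictions):
        total[cls] += 1
        favorable[cls] += int(pred)
    # Favorable-outcome rate per class, scaled by the most-favored class.
    rates = {cls: favorable[cls] / total[cls] for cls in total}
    top = max(rates.values())
    scores = {cls: rate / top for cls, rate in rates.items()}
    flagged = sorted(cls for cls, s in scores.items() if s < threshold)
    return scores, flagged

# Toy data: group "b" receives the favorable outcome half as often as "a".
protected = ["a", "a", "a", "a", "b", "b", "b", "b"]
preds     = [ 1,   1,   1,   1,   1,   1,   0,   0 ]
scores, flagged = proportional_parity(protected, preds)
print(scores)   # {'a': 1.0, 'b': 0.5}
print(flagged)  # ['b']
```

A score of 1.0 means the class receives the favorable outcome as often as the most-favored class; lower scores indicate a potential disparity worth investigating with the insights described below.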

The tools available for each step of working with Bias and Fairness are described in the following sections. Fairness metrics and terminology are described in the Bias and Fairness reference.

| Topic | Describes... |
| --- | --- |
| Advanced options: fairness metrics | Set fairness metrics prior to model building (or from the Leaderboard post-modeling). |
| Advanced options: mitigation | Set mitigation techniques prior to model building (or from the Leaderboard post-modeling). |
| Model insights: Per-Class Bias | Identify whether a model is biased and, if so, by how much and which classes it is biased toward or against. |
| Model insights: Cross-Class Data Disparity | Depict why a model is biased and where in the training data it learned that bias. |
| Model insights: Cross-Class Accuracy | Measure the model's accuracy for each class segment of the protected feature. |
| Model insights: Bias vs Accuracy | View the tradeoff between predictive accuracy and fairness. |
| Fairness monitoring | Configure tests that allow models to recognize, in real time, when protected features in the dataset fail to meet predefined fairness conditions. |
| Fairness monitoring: Per-Class Bias | Use the fairness threshold and each class's fairness score to determine whether certain classes experience bias in the model's predictive behavior. |
| Fairness monitoring: Fairness over time | View how the distribution of a protected feature's fairness scores has changed over time. |
| Bias and Fairness overview | View a brief overview and definition of bias and fairness, with links to further reading. |
| Bias and Fairness reference | Understand the methods used to calculate fairness and to identify biases in the model's predictive behavior. |
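The fairness-monitoring rows above describe checking per-class fairness scores against a threshold across time. A minimal sketch of that idea, grouping predictions into fixed time windows and alerting when a class's favorable-outcome rate drops below a threshold; the record layout, window scheme, and threshold are assumptions for illustration, not DataRobot's monitoring logic:

```python
# Hypothetical "fairness over time" monitor: bucket predictions by time
# window, compute each class's favorable-outcome rate per window, and
# emit an alert for any (window, class) pair below the threshold.
from collections import defaultdict

def fairness_over_time(records, window_size, threshold):
    """records: iterable of (timestamp, protected_class, favorable 0/1)."""
    # window index -> class -> [favorable count, total count]
    windows = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for ts, cls, fav in records:
        counts = windows[ts // window_size][cls]
        counts[0] += fav
        counts[1] += 1
    alerts = []
    for win in sorted(windows):
        for cls, (fav, total) in sorted(windows[win].items()):
            rate = fav / total
            if rate < threshold:
                alerts.append((win, cls, rate))
    return alerts

records = [
    (0, "a", 1), (1, "b", 1),    # window 0: both classes favorable
    (10, "a", 1), (11, "b", 0),  # window 1: class "b" rate drops to 0
]
print(fairness_over_time(records, window_size=10, threshold=0.5))
# [(1, 'b', 0.0)]
```

In production, the alert list would feed whatever notification channel the deployment uses; the point of the sketch is only that monitoring compares per-class, per-window scores against a predefined fairness condition.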

Updated May 12, 2022