

Bias and Fairness overview

In DataRobot, bias represents the difference in a model's predictions for different populations (or groups), while fairness is the measure of that bias. More specifically, DataRobot provides methods to calculate fairness for binary classification models and to identify biases in a model's predictive behavior.

Fairness metrics in modeling describe the ways in which a model can perform differently for distinct groups within the data. When those groups designate groups of people, they might be identified by protected or sensitive characteristics, such as race, gender, age, and veteran status.
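As an illustration of how such a metric works (this is a minimal sketch, not DataRobot's API), the following snippet computes proportional parity: the rate of favorable predictions each group receives, scaled against the most favored group. The column names and data are hypothetical.

```python
import pandas as pd

def proportional_parity(df: pd.DataFrame, protected_col: str, prediction_col: str) -> pd.Series:
    """Favorable-prediction rate per group, scaled so the most favored group = 1.0."""
    favorable_rate = df.groupby(protected_col)[prediction_col].mean()
    return favorable_rate / favorable_rate.max()

# Hypothetical example: a prediction of 1 denotes the favorable outcome (e.g., loan approved).
scores = proportional_parity(
    pd.DataFrame({
        "gender": ["F", "F", "F", "M", "M", "M"],
        "prediction": [1, 0, 0, 1, 1, 1],
    }),
    protected_col="gender",
    prediction_col="prediction",
)
print(scores)  # Groups scoring well below the most favored group may indicate bias.
```

A common rule of thumb is to flag groups whose score falls below a chosen threshold (often 0.8, the "four-fifths rule"); the appropriate metric and threshold depend on the use case.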

The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Bias might also be a product not of the historical process itself but of data collection or sampling methods misrepresenting the ground truth.

See the index for links to the settings and tools available in DataRobot to enable bias mitigation.

Updated March 8, 2022