# Bias and Fairness

> Bias and Fairness - Describes the Bias and Fairness advanced option tab, where you can set protected
> features, choose a fairness metric, and configure bias mitigation techniques.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.596310+00:00` (UTC).

## Primary page

- [Bias and Fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html): Full documentation for this topic (HTML).

## Sections on this page

- [Configure metrics and mitigation pre-Autopilot](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation-pre-autopilot): In-page section heading.
- [Set fairness metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-fairness-metrics): In-page section heading.
- [Set mitigation techniques](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-mitigation-techniques): In-page section heading.
- [Configure metrics and mitigation post-Autopilot](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation-post-autopilot): In-page section heading.
- [Retrain with fairness tests](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#retrain-with-fairness-tests): In-page section heading.
- [Retrain with mitigation](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#retrain-with-mitigation): In-page section heading.
- [Single-model retraining](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#single-model-retraining): In-page section heading.
- [Multiple-model retraining](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#multiple-model-retraining): In-page section heading.
- [Identify mitigated models](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#identify-mitigated-models): In-page section heading.
- [Compare models](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#compare-models): In-page section heading.
- [Mitigation eligibility](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#mitigation-eligibility): In-page section heading.
- [Bias mitigation considerations](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#bias-mitigation-considerations): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Build models](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/index.html): Linked from this page.
- [Advanced options](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/index.html): Linked from this page.
- [Bias and Fairness insights](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/bias/index.html): Linked from this page.
- [protected feature](https://docs.datarobot.com/en/docs/reference/glossary/index.html#protected-feature): Linked from this page.
- [Bias and Fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/bias-resources.html): Linked from this page.
- [bias and fairness reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/bias-ref.html): Linked from this page.
- [Quick Autopilot mode](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/model-ref.html#quick-autopilot): Linked from this page.
- [blueprint](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/model-blueprint.html): Linked from this page.
- [Bias vs Accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/other/bias-tab.html): Linked from this page.
- [Bias and Fairness > Per-Class Bias](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/bias/per-class.html): Linked from this page.
- [External Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html): Linked from this page.
- [Smart Downsampling](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/smart-ds.html): Linked from this page.
- [SHAP](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html): Linked from this page.

## Documentation content

# Bias and Fairness

Bias and Fairness testing provides methods to calculate fairness for a binary classification model and attempt to identify any biases in the model's predictive behavior. In DataRobot, bias represents the difference between a model's predictions for different populations (or groups) while fairness is the measure of the model's bias.

Select protected features in the dataset and choose fairness metrics and mitigation techniques either [before model building](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation-pre-autopilot) or [from the Leaderboard](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation-post-autopilot) once models are built. [Bias and Fairness insights](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/bias/index.html) help identify bias in a model and visualize the root-cause analysis, explaining why the model is learning bias from the training data and from where.

Bias mitigation in DataRobot is a technique for reducing ("mitigating") model bias for an identified [protected feature](https://docs.datarobot.com/en/docs/reference/glossary/index.html#protected-feature) by producing predictions with higher scores on a selected fairness metric for one or more groups (classes) in a protected feature. It is available for binary classification projects and typically results in a small reduction in accuracy in exchange for greater fairness.

See the [Bias and Fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/special-workflows/bias-resources.html) resource page for more complete information on the generally available bias and fairness testing and mitigation capabilities.

## Configure metrics and mitigation pre-Autopilot

Once you select a target, click Show advanced options and select the Bias and Fairness tab. From the tab you can set [fairness metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-fairness-metrics) and [mitigation techniques](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-mitigation-techniques).

### Set fairness metrics

To configure Bias and Fairness, set the values that define your use case. For additional detail, refer to the [bias and fairness reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/bias-ref.html) for common terms and metric definitions.

1. Identify up to 10 Protected Features in the dataset. Protected features must be categorical. The model's fairness is calculated against the protected features selected from the dataset.
2. Define the Favorable Target Outcome, i.e., the outcome perceived as favorable for the protected class relative to the target. For example, if the target is "salary," annual salaries are listed under Favorable Target Outcome, and a favorable outcome is earning greater than 50K.
3. Choose the Primary Fairness Metric most appropriate for your use case from the five options below.

   > [!NOTE] Help me choose
   > If you are unsure of the best metric for your model, click Help me choose. DataRobot presents a questionnaire where each question is determined by your answer to the previous one. Once completed, DataRobot recommends a metric based on your answers. Because bias and fairness are ethically complex, DataRobot's questions cannot capture every detail of each use case. Use the recommended metric as a guidepost; it is not necessarily the correct (or only) metric appropriate for your use case. Select different metrics to observe how answering the questions differently would affect the recommendation. Click Select to add the highlighted option to the Primary Fairness Metric field.

   | Metric | Description |
   | --- | --- |
   | Proportional Parity | For each protected class, what is the probability of receiving favorable predictions from the model? This metric (also known as "Statistical Parity" or "Demographic Parity") is based on equal representation of the model's target across protected classes. |
   | Equal Parity | For each protected class, what is the total number of records with favorable predictions from the model? This metric is based on equal representation of the model's target across protected classes. |
   | Prediction Balance (Favorable Class Balance and Unfavorable Class Balance) | For all actuals that were favorable/unfavorable outcomes, what is the average predicted probability for each protected class? This metric is based on equal representation of the model's average raw scores across each protected class and is part of the set of Prediction Balance fairness metrics. |
   | True Favorable Rate Parity and True Unfavorable Rate Parity | For each protected class, what is the probability of the model predicting the favorable/unfavorable outcome for all actuals of the favorable/unfavorable outcome? This metric is based on equal error. |
   | Favorable Predictive Value Parity and Unfavorable Predictive Value Parity | What is the probability of the model being correct (i.e., the actual results being favorable/unfavorable)? This metric (also known as "Positive Predictive Value Parity") is based on equal error. |

   The fairness metric serves as the foundation for the calculated fairness score: a numerical computation of the model's fairness against the protected class.
4. Set a Fairness Threshold for the project. The threshold serves as a benchmark for the model's fairness score. That is, it measures whether a model performs within appropriate fairness bounds for each protected class. It does not affect the fairness score or performance of any protected class. (See the [reference section](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/bias-ref.html) for more information.)
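
As a concrete illustration of Proportional Parity and the fairness threshold, the sketch below computes per-class favorable-prediction rates and scales them relative to the best-scoring class. The function names and the relative-scaling convention are illustrative assumptions, not DataRobot's implementation; see the bias and fairness reference for the actual definitions.

```python
from collections import Counter

def proportional_parity(preds, classes, favorable=1):
    """Per-class rate of favorable predictions, scaled so the
    best-scoring class is 1.0 (a simplified fairness score)."""
    totals = Counter(classes)
    favorable_counts = Counter(c for p, c in zip(preds, classes) if p == favorable)
    rates = {c: favorable_counts[c] / totals[c] for c in totals}
    best = max(rates.values())
    return {c: r / best for c, r in rates.items()}

def below_threshold(scores, threshold=0.8):
    """Classes whose fairness score falls under the project threshold."""
    return sorted(c for c, s in scores.items() if s < threshold)

# Group "A": 4/5 favorable predictions; group "B": 1/5 favorable.
preds   = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
classes = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
scores = proportional_parity(preds, classes)
print(scores)                   # {'A': 1.0, 'B': 0.25}
print(below_threshold(scores))  # ['B']
```

With a threshold of 0.8 (the common four-fifths rule), group "B" falls outside the fairness bounds, which is the kind of result the Per-Class Bias insight surfaces.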

### Set mitigation techniques

Select a bias mitigation technique for DataRobot to apply automatically. DataRobot uses the selected technique to automatically attempt bias mitigation for the top three full or Comprehensive Autopilot Leaderboard models (based on accuracy). You can also initiate bias mitigation [manually](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#retrain-with-mitigation) after Autopilot completes. (If you used [Quick Autopilot mode](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/model-ref.html#quick-autopilot), for example, manual mode allows you to apply mitigation to selected models). With either method, once applied, you can compare [mitigated versus unmitigated models](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#compare-models).

The table below summarizes the fields:

| Field | Description |
| --- | --- |
| Bias mitigation feature | Lists the protected feature(s); select the feature for which to reduce the model's bias. |
| Include as a predictor variable | Sets whether to include the mitigation feature as an input to model training. |
| Bias mitigation technique | Sets the mitigation technique and the point in model processing when mitigation is applied. |

The steps below provide greater detail for each field:

1. Select a feature from the Bias mitigation feature dropdown, which lists the feature(s) that you set as protected in the Protected Features field of the general Bias and Fairness settings. This is the feature toward which you would like to reduce the model's bias.
2. Once the mitigation feature is set, DataRobot computes data quality for the feature. When the check is successful, the option to include the protected feature as a predictor variable becomes available. Check the box to use the feature both to attempt mitigation and as an input into model training. Leave it unchecked to use the feature for mitigation only, not as a training input. This can be useful when you are legally prohibited from including sensitive data as a model input (or simply don't want to) but would still like to attempt mitigation based on it.

   > [!NOTE] What does the data quality check identify?
   > During the data quality check, three basic questions are answered for the chosen mitigation feature and the chosen target:
   >
   > - Does the mitigation feature have too many rows where the value is completely missing?
   > - Are there any values of the mitigation feature that are too rare to allow drawing firm conclusions? For example, consider a dataset with 10,000 rows where the mitigated feature is race. One of the values, Inuit, occurs only seven times, making the sample too small to be representative.
   > - Are there any combinations of class plus target that are rare or absent? For example, consider a mitigation feature of gender. The categories Male and Female are both numerous, but the positive target label never occurs in Female rows.
   >
   > If the quality check does not pass, a warning appears. Address the issues in the dataset, then re-upload and try again.
3. Set the Mitigation technique.

   > [!NOTE] Which fairness metrics does each mitigation technique use?
   > The mitigation technique names, "pre" and "post," refer to the point in the workflow (as illustrated in the [blueprint](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/model-blueprint.html)) where the technique is applied. For example, reweighing is called "preprocessing" because it happens before the model is trained; Rejection Option-based Classification is called "postprocessing" because it happens after the model has been trained. The techniques use the following metrics:
   >
   > | Technique | Metric |
   > | --- | --- |
   > | Preprocessing Reweighing | Primarily Proportional Parity (but may, tangentially, improve other fairness metrics). |
   > | Postprocessing with Rejection Option-based Classification | Proportional Parity and True Favorable and True Unfavorable Rate Parity. |
4. Start the model building process. DataRobot automatically attempts mitigation on the top three eligible models produced by Autopilot against the Bias mitigation feature. Mitigated models can be identified by the BIAS MITIGATION badge on the Leaderboard. See the explanation of what makes a model [eligible for mitigation](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#mitigation-eligibility), as well as a table listing ineligible models.
5. Compare [bias and accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#compare-models) of mitigated vs. unmitigated models.
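
The data quality check described in step 2 can be approximated in a few lines. This is a hypothetical sketch, not DataRobot's implementation: the 50% missing-data limit appears in the eligibility rules on this page, while the minimum class size and the `None`-for-missing convention are illustrative assumptions.

```python
from collections import Counter

def data_quality_issues(feature_values, targets, positive_label=1,
                        max_missing_frac=0.5, min_class_rows=100):
    """Flag the three conditions the bias-mitigation data quality
    check looks for: excessive missing values, rare classes, and
    class/target combinations that never occur."""
    issues = []
    n = len(feature_values)
    missing = sum(1 for v in feature_values if v is None)
    if missing / n > max_missing_frac:
        issues.append("too many missing mitigation-feature values")
    counts = Counter(v for v in feature_values if v is not None)
    for cls, count in counts.items():
        if count < min_class_rows:
            issues.append(f"class {cls!r} is too rare ({count} rows)")
        # Does this class ever co-occur with the positive target label?
        if not any(v == cls and t == positive_label
                   for v, t in zip(feature_values, targets)):
            issues.append(f"class {cls!r} never has the positive target label")
    return issues

# Mirrors the docs' gender example: both classes are numerous, but the
# positive target label never occurs in Female rows.
gender = ["Male"] * 150 + ["Female"] * 120
target = [1, 0] * 75 + [0] * 120
print(data_quality_issues(gender, target))
```

Running the example reports only the missing class/target combination, which is the condition that would trigger the warning in the UI.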
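
For intuition about the Preprocessing Reweighing technique in step 3, the sketch below implements the textbook reweighing formula (Kamiran and Calders): each (class, outcome) cell is weighted by its expected frequency under independence divided by its observed frequency, so favorable outcomes in under-represented groups are up-weighted before training. This illustrates the general technique only and is not DataRobot's implementation.

```python
from collections import Counter

def reweighing_weights(groups, outcomes):
    """Weight each (group, outcome) cell by P(group) * P(outcome) /
    P(group, outcome) so group and outcome become independent in the
    reweighted training data."""
    n = len(groups)
    p_group = Counter(groups)
    p_outcome = Counter(outcomes)
    p_joint = Counter(zip(groups, outcomes))
    return {
        (g, o): (p_group[g] / n) * (p_outcome[o] / n) / (p_joint[(g, o)] / n)
        for (g, o) in p_joint
    }

# Group "A" gets the favorable outcome 3 times out of 4; group "B" only once.
weights = reweighing_weights(["A"] * 4 + ["B"] * 4, [1, 1, 1, 0, 1, 0, 0, 0])
# Favorable outcomes in the under-represented group "B" are up-weighted:
print(weights[("B", 1)])  # 2.0
```

A model trained with these sample weights sees favorable outcomes at equal effective rates across groups, which is why this technique primarily improves Proportional Parity.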

## Configure metrics and mitigation post-Autopilot

If you did not configure Bias and Fairness prior to model building, you can configure [fairness tests](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#retrain-with-fairness-tests) and [mitigation techniques](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#retrain-with-mitigation) from the Leaderboard.

### Retrain with fairness tests

The following describes applying fairness metrics to models after Autopilot completes.

1. Select a model and click Bias and Fairness > Settings.
2. Follow the advanced options instructions on [configuring bias and fairness](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-fairness-metrics).
3. Click Save. DataRobot then configures fairness testing for all models in your project based on these settings.

### Retrain with mitigation

After Autopilot has finished, you can apply mitigation to any models that have not already been mitigated. To do so, select [one](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#single-model-retraining) or [multiple](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#multiple-model-retraining) model(s) from the Leaderboard and retrain them with bias mitigation settings applied.

> [!NOTE] Note
> While you cannot retrain an already mitigated model, even on a different protected feature, you can return to the parent and select a different feature or technique for mitigation.

From the [parent model](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#identify-mitigated-models), you can view the Models with Mitigation Applied table. This table lists relationships between the parent model and any child models with mitigation applied. Note that the parent model itself does not have mitigation applied. All child mitigated models are listed by model ID, including their mitigation settings.

#### Single-model retraining

> [!NOTE] Note
> If you haven't previously completed the Bias and Fairness configuration in advanced options prior to model building, you must first set those fields via the [Bias and Fairness > Settings](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-fairness-metrics) tab.

To apply mitigation to a single Leaderboard model after Autopilot completes:

1. Expand any [eligible](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#mitigation-eligibility) Leaderboard model and open Bias and Fairness > Bias Mitigation.
2. [Configure the fields](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-mitigation-techniques) for bias mitigation.
3. Click Apply to start building a new, mitigated version of the model. When training is complete, the model can be [identified](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#identify-mitigated-models) on the Leaderboard by the BIAS MITIGATION badge.
4. Compare [bias and accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#compare-models) of mitigated vs. unmitigated models.

#### Multiple-model retraining

To apply mitigation to multiple Leaderboard models after Autopilot completes:

1. Use the checkboxes to the left of any [eligible](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#mitigation-eligibility) models that have not already been mitigated.
2. From the menu, select Model processing > Apply bias mitigation for selected models.
3. In the resulting window, [configure the fields](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-mitigation-techniques) for bias mitigation.
4. Click Apply to start building new, mitigated versions of the models. When training is complete, the models can be [identified](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#identify-mitigated-models) on the Leaderboard by the BIAS MITIGATION badge.
5. Compare [bias and accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#compare-models) of mitigated vs. unmitigated models.

### Identify mitigated models

The Leaderboard provides several indicators for mitigated models and their parent (unmitigated) versions:

- A BIAS MITIGATION badge. Use the Leaderboard search to easily identify all mitigated models.
- Model naming reflects [mitigation settings](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#set-mitigation-techniques) (technique, protected feature, and predictor variable status).
- The Bias Mitigation tab includes a link to the original, unmitigated parent model.

### Compare models

Use the [Bias vs Accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/other/bias-tab.html) tab to compare the bias and accuracy of mitigated vs. unmitigated models. The chart will likely show that mitigated models have higher fairness scores (less bias) than their unmitigated versions, but lower accuracy.

Before a model (mitigated or unmitigated) becomes available on the chart, you must first calculate its fairness scores. To compare mitigated and unmitigated models:

1. Open a model displaying the BIAS MITIGATION badge and navigate to Bias and Fairness > Per-Class Bias. The fairness score is calculated automatically once you open the tab.
2. Navigate to the Bias and Fairness > Bias Mitigation tab to retrieve a link to the parent model. Click the link to open the parent.
3. From the parent model, visit the Bias and Fairness > Per-Class Bias tab to automatically calculate the fairness score.
4. Open the Bias vs Accuracy tab and compare the results. In this example, the mitigated model (shown in green) has higher accuracy (Y-axis) and fairness (X-axis) scores than the parent (shown in magenta).

## Mitigation eligibility

DataRobot selects the top three eligible models for mitigation, and as a result, those labeled with the BIAS MITIGATION badge may not be the top three models on the Leaderboard after Autopilot runs. Other models may be in a higher position on the Leaderboard but will not have mitigation applied because they were ineligible.

If you select Preprocessing Reweighing as the mitigation technique, the following models are ineligible for reweighing because the models don’t use weights:

- Nystroem Kernel SVM Classifier
- Gaussian Process Classifier
- K-nearest Neighbors Classifier
- Naive Bayes Classifier
- Partial Least Squares Classifier
- Legacy Neural Net models: "vanilla" Neural Net Classifier, Dropout Input Neural Net Classifier, "vanilla" Two Layer Neural Net Classifier, Two Hidden Layer Dropout Rectified Linear Neural Net Classifier (but note that contemporary Keras models can be mitigated)
- Certain basic linear models: Logistic Regression, Regularized Logistic Regression (but note that ElasticNet models can be mitigated)
- Eureqa and Eureqa GAM Classifiers
- Two-stage Logistic Regression
- SVM Classifier, with any kernel

If you select either mitigation technique, the following models and/or projects are ineligible for mitigation:

- Models that have already had bias mitigation applied.
- Majority Class Classifier (predicts a constant value).
- External Predictions models (these use a special column uploaded with the training data and cannot make new predictions).
- Blender models.
- Projects using Smart Downsampling.
- Projects using custom weights.
- Projects where the Mitigation Feature is missing over 50% of its data.
- Time series or OTV projects (i.e., any project with time-based partitioning).
- Projects run with SHAP value support.
- Single-column, standalone text converter models: Auto-Tuned Word N-Gram Text Modeler, Auto-Tuned Char N-Gram Modeler, and Auto-Tuned Summarized Categorical Modeler.

## Bias mitigation considerations

Consider the following when working with bias mitigation:

- Mitigation applies to a single, categorical protected feature.
- For the Rejection Option-based Classification mitigation technique, the mitigation feature must have at least two classes that each have at least 100 rows in the training data. For the Preprocessing Reweighing technique, there is no explicit minimum row count, but mitigation effectiveness may be unpredictable with very small row counts.
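
The row-count requirement above can be checked before configuring mitigation. A minimal sketch (the function name and the `None`-for-missing convention are assumptions):

```python
from collections import Counter

def roc_mitigation_eligible(feature_values, min_rows=100, min_classes=2):
    """Check the stated Rejection Option-based Classification requirement:
    at least two classes of the mitigation feature with >= 100 rows."""
    counts = Counter(v for v in feature_values if v is not None)
    large_enough = [c for c, n in counts.items() if n >= min_rows]
    return len(large_enough) >= min_classes

print(roc_mitigation_eligible(["A"] * 120 + ["B"] * 150 + ["C"] * 10))  # True
print(roc_mitigation_eligible(["A"] * 120 + ["B"] * 40))                # False
```

In the first example, class "C" is too small to count toward eligibility, but "A" and "B" satisfy the two-class minimum on their own.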
