# Select data and display threshold

> Select data and display threshold - Thresholds in the ROC Curve tab in DataRobot set the class
> boundary for a predicted value. The display threshold updates the visualizations, and the prediction
> threshold changes the threshold used for all predictions made with the model.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.587660+00:00` (UTC).

## Primary page

- [Select data and display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html): Full documentation for this topic (HTML).

## Sections on this page

- [Select data for visualizations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#select-data-for-visualizations): In-page section heading.
- [Set the display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#set-the-display-threshold): In-page section heading.
- [Methods of setting the display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#methods-of-setting-the-display-threshold): In-page section heading.
- [Set the prediction threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#set-the-prediction-threshold): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Model insights](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/index.html): Linked from this page.
- [Evaluate](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/index.html): Linked from this page.
- [ROC Curve tools](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/index.html): Linked from this page.
- [ROC Curve tab](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-tab-use.html): Linked from this page.
- [Confusion matrix](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/confusion-matrix-classic.html): Linked from this page.
- [Prediction Distribution graph](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/pred-dist-graph.html): Linked from this page.
- [ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html): Linked from this page.
- [Profit curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/profit-curve-classic.html): Linked from this page.
- [Cumulative charts](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/cumulative-charts-classic.html): Linked from this page.
- [Custom charts](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/custom-charts.html): Linked from this page.
- [Metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/metrics-classic.html): Linked from this page.
- [partitions that have been enabled](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/partitioning.html#partitioning-methods): Linked from this page.
- [Time-aware modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-adv-modeling/ts-date-time.html#lift-roc): Linked from this page.
- [holdout](https://docs.datarobot.com/en/docs/reference/glossary/index.html#holdout): Linked from this page.
- [holdout has not been unlocked](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/build-basic/unlocking-holdout.html): Linked from this page.
- [cross-validation folds](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/data-partitioning.html#k-fold-cross-validation-cv): Linked from this page.
- [Predict > Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/predict.html#make-predictions-on-an-external-dataset): Linked from this page.
- [make predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/predictions/index.html): Linked from this page.
- [Deploy](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/deploy-model.html): Linked from this page.

## Documentation content

# Select data and display threshold

To use [ROC Curve tab](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-tab-use.html) visualizations, you [select a data source](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#select-data-for-visualizations) and a [display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#set-the-display-threshold). These values drive the ROC Curve visualizations:

- Confusion matrix
- Prediction Distribution graph
- ROC curve
- Profit curve
- Cumulative charts
- Custom charts
- Metrics

## Select data for visualizations

To select the data source reflected in ROC Curve visualizations:

1. Select a model on the Leaderboard and navigate to **Evaluate > ROC Curve**.

2. Click the **Data Selection** dropdown menu above the Prediction Distribution graph and select a data source to view in the visualizations.

    **Note:** The **Data Selection** list includes only the [partitions that have been enabled](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/partitioning.html#partitioning-methods) and run. The list includes all test datasets that have been added to the project; test dataset selections are inactive until they are run. [Time-aware modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/time/ts-adv-modeling/ts-date-time.html#lift-roc) allows backtest-based selections.

    | Selection | Description |
    |-----------|-------------|
    | Holdout | Visualizations use the [holdout](https://docs.datarobot.com/en/docs/reference/glossary/index.html#holdout) partition. **Holdout** does not appear in the selection list if [holdout has not been unlocked](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/build-basic/unlocking-holdout.html) for the model and run. |
    | Cross Validation | Visualizations use the cross-validation partition. DataRobot "stacks" the [cross-validation folds](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/data-partitioning.html#k-fold-cross-validation-cv) (5 by default) and computes the visualizations on the combined data (out-of-fold stacking is sketched after these steps). |
    | Validation | Visualizations use the validation partition. |
    | External test data | Visualizations use the data for an external test you have run. If you've added a test dataset but have not yet run it, that test dataset selection is inactive. |
    | Add external test data | If you select **Add external data**, the [Predict > Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/predict.html#make-predictions-on-an-external-dataset) tab displays. Use the tab to add test data and run an external test. Then return to the ROC Curve tab, click **Data Selection**, and select the test data you ran. |

3. View the ROC Curve tab visualizations. Update the display threshold (see below) as necessary to meet your modeling goals.
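
For context on the "stacking" described above for the Cross Validation selection, the following is a minimal sketch using scikit-learn (not DataRobot code): each row is scored by the fold that did not train on it, and the out-of-fold predictions are then combined and evaluated as one set.

```python
# Minimal sketch (scikit-learn): stack out-of-fold predictions from 5 CV
# folds and compute a metric on the combined predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1_000, random_state=0)

# Each row is predicted by the fold that did NOT train on it; the five
# folds' predictions are concatenated ("stacked") into a single array.
oof_probs = cross_val_predict(
    LogisticRegression(max_iter=1_000),
    X, y, cv=5, method="predict_proba",
)[:, 1]

# Threshold-based visualizations and metrics can now be computed once,
# over the combined out-of-fold predictions.
print(f"Stacked 5-fold AUC: {roc_auc_score(y, oof_probs):.3f}")
```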

## Set the display threshold

The display threshold is the basis for several visualizations on the ROC Curve tab. The threshold you set updates the Prediction Distribution graph, as well as the Chart, Matrix, and Metrics panes described in the following sections. Experiment with the threshold to meet your modeling goals.
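
As a rough illustration of why a single display threshold drives all of these panes, the sketch below (plain NumPy/scikit-learn with made-up scores, not DataRobot code) shows how the same predicted probabilities yield different confusion matrices and F1 scores at different thresholds:

```python
# Sketch: the same predicted probabilities produce different confusion
# matrices and metrics depending on the chosen threshold.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                         # hypothetical actuals
y_prob = np.clip(0.4 * y_true + 0.6 * rng.random(500), 0, 1)  # hypothetical scores

for threshold in (0.25, 0.50, 0.75):
    y_pred = (y_prob >= threshold).astype(int)   # apply the display threshold
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    print(f"threshold={threshold:.2f}  TP={tp} FP={fp} FN={fn} TN={tn}  "
          f"F1={f1_score(y_true, y_pred):.3f}")
```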

To set the display threshold:

1. On the ROC Curve tab, click the **Display Threshold** dropdown menu.

    | # | Element | Description |
    |---|---------|-------------|
    | 1 | **Display Threshold** | Displays the threshold value you set. Click to select the threshold settings. Note that you can also update the display threshold by clicking in the **Prediction Distribution** graph. The Display Threshold defaults to maximize F1. If you switch to a different model, the Display Threshold updates to maximize F1 for the new model. This allows you to easily compare classification results between models. If you select a different data source (by selecting **Holdout**, **Cross Validation**, or **Validation** in the **Data Selection** list), the Display Threshold updates to maximize F1 for the new data. |
    | 2 | **Threshold** | Drag the slider or enter a display threshold value; the visualization tools update accordingly. |
    | 3 | **Maximize option** | Select a threshold that maximizes [metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/metrics-classic.html) such as the F1 score, MCC (Matthews Correlation Coefficient), or profit. To maximize for profit, first set a payoff by clicking **+ Add payoff** on the **Matrix** pane. The metrics values on the ROC curve display might not always match those shown on the Leaderboard. For ROC curve metrics, DataRobot keeps up to 120 of the calculated thresholds that best represent the distribution. Because of this, minute details might be lost. For example, if you select **Maximize MCC** as the display threshold, DataRobot preserves the top 120 thresholds and calculates the maximum among them. This value is usually very close but may not exactly match the metric value. (A threshold-search sketch follows these steps.) |
    | 4 | **Use as Prediction Threshold** | Click to set the **Prediction Threshold** to the current value of the **Display Threshold**. By doing so, at prediction time, the threshold value serves as the boundary between positive and negative classifications: observations above the threshold receive the positive class's label and those below the threshold receive the negative class's label. The **Prediction Threshold** is used when you generate [profit curves](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/profit-curve-classic.html) and when you [make predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/predictions/index.html). |
    | 5 | **View Prediction Threshold** | Click to reset the visualization components (graphs and charts) to the model's prediction threshold. |
    | 6 | **Threshold Type** | Select **Top % of highest predictions** or a **Prediction value (0 - 1)**. See **Threshold Type** for details. |

    In this example, the **Display Threshold** is set to 0.2396, which maximizes the F1 score.

2. View the updated visualizations. Valid input for the Display Threshold updates the page elements described above (the Prediction Distribution graph and the Chart, Matrix, and Metrics panes).

    **Note:** The visualizations display the closest data point to the specified threshold (i.e., if you entered 20%, the display might actually be something like 20.7%). The box reports the exact value after you press Return.
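
To make the maximize options (and the "up to 120 thresholds" approximation described above) concrete, here is a minimal sketch in plain NumPy/scikit-learn, not DataRobot's internal logic: a limited set of candidate thresholds is evaluated and the one that maximizes the chosen metric (F1 or MCC) is returned.

```python
# Sketch: choose the threshold that maximizes a metric over a limited set
# of candidate thresholds drawn from the score distribution.
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef

def best_threshold(y_true, y_prob, metric, n_candidates=120):
    # Candidate thresholds are quantiles of the scores, capped at
    # n_candidates to approximate a full sweep over every unique score.
    candidates = np.unique(np.quantile(y_prob, np.linspace(0, 1, n_candidates)))
    scores = [metric(y_true, (y_prob >= t).astype(int)) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1_000)
y_prob = np.clip(0.35 * y_true + 0.65 * rng.random(1_000), 0, 1)

for name, metric in (("F1", f1_score), ("MCC", matthews_corrcoef)):
    threshold, score = best_threshold(y_true, y_prob, metric)
    print(f"Maximize {name}: threshold={threshold:.4f}  {name}={score:.3f}")
```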

### Methods of setting the display threshold

Use any of the following methods to set the display threshold:

**Specify the threshold:**

1. On the ROC Curve tab, click the **Display Threshold** dropdown menu.
2. Use the slider or enter a value to set the display threshold. If the **Threshold Type** is **Top %**, enter a value between 0 and 100 (which will update to the exact point after entry; the conversion is sketched below). If the **Threshold Type** is **Prediction value**, enter a number between 0.0 and 1.0. If the input is not valid, a warning appears to the right.
3. Click outside of the dropdown to view the effects of the display threshold on the visualization tools.
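
The difference between the two Threshold Type options can be sketched as follows (plain NumPy, not DataRobot code): a Top % threshold corresponds to the prediction-value cutoff at the matching quantile of the predicted probabilities, which is why an entered percentage snaps to the nearest actual data point.

```python
# Sketch: converting a "Top %" threshold into a prediction-value cutoff.
import numpy as np

rng = np.random.default_rng(2)
y_prob = rng.random(1_000)   # hypothetical predicted probabilities

top_percent = 20             # "Top 20% of highest predictions"
cutoff = np.quantile(y_prob, 1 - top_percent / 100)

flagged = y_prob >= cutoff
print(f"Top {top_percent}% of predictions corresponds to a cutoff of {cutoff:.4f}")
print(f"{flagged.mean():.1%} of rows score at or above that cutoff")
```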

**Set to maximized metric:**

1. Select a [metric](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/metrics-classic.html) maximum to use for the display threshold. Choose from F1, MCC, or profit; the metrics' maximum values display.

    **Note:** You must set the **Matrix** pane to a Payoff Matrix to be able to maximize profit. Otherwise, the **Maximize profit** option is grayed out. (A payoff-based profit sketch follows below.)

2. Click outside of the dropdown to view the effects of the display threshold on the visualization tools.

**Prediction Distribution graph:**

1. Hover over the Prediction Distribution graph until a "ghost" line appears with the corresponding value above it.
2. Click to automatically update the display threshold to the new selected value.


## Set the prediction threshold

Prediction requests for binary classification models return both a probability of the positive class and a label. Although DataRobot automatically calculates a threshold (the [display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#set-the-display-threshold)), the threshold used to apply the label at prediction time defaults to `0.5`. In the resulting predictions, records with values above that threshold receive the positive class's label (in addition to the probability). If the default would force you to post-process predictions in order to apply labels at your intended threshold, you can skip that step by changing the prediction threshold.
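
The post-processing step mentioned above might look like the following sketch (plain pandas; the column names are hypothetical and depend on how the predictions were generated). If the prediction threshold is set on the model instead, the returned labels already reflect the intended boundary and this step is unnecessary.

```python
# Sketch: post-processing prediction output to apply a custom threshold.
# Column names here are hypothetical examples, not a fixed output schema.
import pandas as pd

predictions = pd.DataFrame({
    "row_id": [0, 1, 2, 3],
    "positive_probability": [0.12, 0.41, 0.58, 0.93],
    "label": [0, 0, 1, 1],   # labels produced with the default 0.5 threshold
})

custom_threshold = 0.40      # the boundary you actually want
predictions["relabeled"] = (
    predictions["positive_probability"] >= custom_threshold
).astype(int)
print(predictions)
```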

To set the prediction threshold:

1. On the **ROC Curve** tab, click the **Display Threshold** dropdown menu.
2. Update the display threshold if necessary.
3. Select **Use as Prediction Threshold**. Once deployed, all predictions made with this model that fall above the new threshold will return the positive class label.

The Prediction Threshold value set here is also saved to the following tabs:

- Make Predictions
- Deploy

Changing the value in any of these tabs writes the new value back to all the tabs. Once a model is deployed, the threshold cannot be changed within that deployment.

To return the setting to the default threshold value of `0.5`, click **View Prediction Threshold**.
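
If you manage models programmatically, the DataRobot Python client can also set the prediction threshold; the snippet below is a sketch only, and the method name (`set_prediction_threshold`) is an assumption to verify against the client documentation for your installed version.

```python
# Sketch only: set a binary classification model's prediction threshold
# via the DataRobot Python client. The setter method name is assumed;
# confirm it against the client docs for your version.
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

model = dr.Model.get(project="PROJECT_ID", model_id="MODEL_ID")
model.set_prediction_threshold(0.2396)  # e.g., the F1-maximizing display threshold
```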
