# ROC curve

> ROC curve - The ROC curve visualization in DataRobot helps you explore classification performance
> and statistics for a selected model. ROC curves plot the true positive rate against the false
> positive rate for a given data source.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.587230+00:00` (UTC).

## Primary page

- [ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html): Full documentation for this topic (HTML).

## Sections on this page

- [Evaluate a model using the ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html#evaluate-a-model-using-the-roc-curve): In-page section heading.
- [Analyze the ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html#analyze-the-roc-curve): In-page section heading.
- [ROC curve shape](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html#roc-curve-shape): In-page section heading.
- [Area under the ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html#area-under-the-roc-curve): In-page section heading.
- [Kolmogorov-Smirnov (KS) metric](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html#kolmogorov-smirnov-ks-metric): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Model insights](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/index.html): Linked from this page.
- [Evaluate](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/index.html): Linked from this page.
- [ROC Curve tools](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/index.html): Linked from this page.
- [data source](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#select-data-for-visualizations): Linked from this page.

## Documentation content

# ROC curve

The ROC curve visualization (on the [ROC Curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/index.html) tab) helps you explore classification performance and statistics for a selected model. ROC curves plot the true positive rate against the false positive rate for a given data source.

## Evaluate a model using the ROC curve

1. Select a model on the Leaderboard and navigate to Evaluate > ROC Curve.
2. Select a data source and set the display threshold. The ROC curve displays in the center of the ROC Curve tab, highlighted with several elements.

## Analyze the ROC curve

View the ROC curve and consider the following:

- The shape of the curve
- The area under the curve (AUC)
- The Kolmogorov-Smirnov (KS) metric

### ROC curve shape

Use the ROC curve to assess model quality. The curve, drawn based on each value in the dataset, plots the true positive rate against the false positive rate. Some takeaways from an ROC curve:

- An ideal curve grows quickly for small x-values and slows as x approaches 1.
- The curve illustrates the tradeoff between sensitivity and specificity. An increase in sensitivity results in a decrease in specificity.
- A "perfect" ROC curve yields a point in the top left corner of the chart (coordinate (0,1)), indicating no false negatives and no false positives (a high true positive rate and a low false positive rate).
- The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the model and the closer it is to a random assignment model.
- The shape of the curve is determined by the overlap of the classification distributions.
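
The sweep that draws this curve can be sketched in a few lines: sort predictions by score and, as the threshold drops past each one, record the cumulative true positive and false positive rates. The data below is illustrative only, not DataRobot output:

```python
def roc_points(labels, scores):
    """Return (FPR, TPR) pairs for a threshold at each prediction.

    labels: 1 for the positive class, 0 for the negative class.
    scores: predicted probability of the positive class.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort by score descending; lowering the threshold past each
    # prediction sweeps the curve from (0, 0) toward (1, 1).
    pairs = sorted(zip(scores, labels), reverse=True)
    points = [(0.0, 0.0)]
    tp = fp = 0
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical example data.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
for fpr, tpr in roc_points(labels, scores):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

An ideal model pushes these points toward the top-left corner (0, 1); a random one keeps them near the diagonal.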

### Area under the ROC curve

The AUC (area under the curve) is the area of the region that lies below and to the right of the ROC curve.

> [!NOTE] Note
> AUC does not display automatically in the Metrics pane. Click Select metrics and select Area Under the Curve (AUC) to display it.

AUC is a metric for binary classification that considers all possible thresholds and summarizes performance in a single value, reported in the bottom right of the graph. The larger the area under the curve, the more accurate the model. However:

- An AUC of 0.5 suggests that predictions based on this model are no better than a random guess.
- An AUC of 1.0 suggests that predictions based on this model are perfect, and because a perfect model is highly uncommon, it is likely flawed (target leakage is a common cause of this result).

[StackExchange](http://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) provides an excellent explanation of AUC.
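
One way to see why AUC summarizes all thresholds at once: it equals the probability that a randomly chosen positive example outscores a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with made-up data, not DataRobot's implementation:

```python
def auc(labels, scores):
    """AUC as P(positive score > negative score), ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Compare every positive/negative pair of scores.
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical example data: 8 of the 9 pos/neg pairs are ranked
# correctly, so AUC = 8/9.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(auc(labels, scores))  # 0.888...
```

A model that ranks every positive above every negative scores 1.0 here, and random scores average 0.5, matching the interpretation of the two AUC extremes above.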

### Kolmogorov-Smirnov (KS) metric

For binary classification projects, the KS optimization metric measures the maximum distance between two non-parametric distributions.

The KS metric evaluates and ranks models based on the degree of separation between true positive and false positive distributions.

> [!NOTE] Note
> The KS metric does not display automatically in the Metrics pane. Click Select metrics and select Kolmogorov-Smirnov Score to display it.

For a complete description of the Kolmogorov–Smirnov test (K–S test or KS test), see the [Wikipedia](https://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test) article on the topic.
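
The "maximum distance between two distributions" described above can be sketched as the largest vertical gap between the score distributions of the two classes, which on the ROC curve is the maximum of TPR minus FPR across thresholds. A pure-Python illustration with hypothetical data (not DataRobot's implementation; ties between class scores are ignored for simplicity):

```python
def ks_statistic(labels, scores):
    """Max gap between true positive and false positive rates
    over all thresholds (the two-sample KS distance between the
    positive-class and negative-class score distributions)."""
    pos = sum(labels)
    neg = len(labels) - pos
    # Sweep the threshold from high to low, tracking both rates.
    pairs = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    best = 0.0
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        best = max(best, abs(tp / pos - fp / neg))
    return best

# Hypothetical example data.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(ks_statistic(labels, scores))  # 0.666...
```

A KS of 1.0 means the two score distributions are fully separated; values near 0 mean the model barely distinguishes the classes.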
