# Multilabel Per-Label Metrics

> Multilabel Per-Label Metrics: per-label metrics summarize performance across the labels for
> different values of the prediction threshold.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.754002+00:00` (UTC).

## Primary page

- [Multilabel Per-Label Metrics](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html): Full documentation for this topic (HTML).

## Sections on this page

- [Overview](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#overview): In-page section heading.
- [Metric value table](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#metric-value-table): In-page section heading.
- [Threshold selector](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#threshold-selector): In-page section heading.
- [Metric value chart](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#metric-value-chart): In-page section heading.
- [Display label metrics](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#display-label-metrics): In-page section heading.
- [Show option](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#show-option): In-page section heading.
- [Pinning labels](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#pinning-labels): In-page section heading.

## Related documentation

- [NextGen UI documentation](https://docs.datarobot.com/en/docs/workbench/index.html): Linked from this page.
- [Workbench](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/index.html): Linked from this page.
- [Predictive experiments](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/index.html): Linked from this page.
- [Evaluate models](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/index.html): Linked from this page.
- [during experiment setup](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/create-experiments/create-predictive/ml-basic-experiment.html#multilabel-targets): Linked from this page.
- [Lift Chart](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/lift-chart.html): Linked from this page.
- [ROC Curve](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/roc-curve.html): Linked from this page.
- [Feature Effects](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/feature-effects.html): Linked from this page.
- [Word Cloud](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/word-cloud.html): Linked from this page.
- [ROC Curve metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/metrics-classic.html): Linked from this page.
- [graph interpretation](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html): Linked from this page.
- [classification use case](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-tab-use.html#classification-use-case-1): Linked from this page.
- [payoff matrix](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/profit-curve-classic.html): Linked from this page.
- [display threshold](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/threshold.html#set-the-display-threshold): Linked from this page.

## Documentation content

# Multilabel: Per-Label Metrics

> [!NOTE] Availability information
> Availability of multilabel modeling depends on your DataRobot package. If it is not enabled for your organization, contact your DataRobot representative for more information.

| Tab | Description |
| --- | --- |
| Performance | Summarizes performance across the labels for different values of the prediction threshold. |

Multilabel: Per-Label Metrics is a visualization designed specifically for multilabel models. It helps to evaluate a model by summarizing performance across the labels for different values of the prediction threshold (which can be set from the page). Configure multilabel modeling [during experiment setup](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/create-experiments/create-predictive/ml-basic-experiment.html#multilabel-targets).

In addition to this insight, multilabel-specific modeling insights are available from the following Leaderboard insights:

- Lift Chart
- ROC Curve
- Feature Effects
- Word Cloud

Use the Label dropdown to generate the insight for a selected label.

## Overview

The Per-Label Metrics chart depicts binary performance metrics, treating each label as a binary feature. Specifically, it:

- Displays average and per-label model performance, based on the prediction threshold, for a selectable metric.
- Helps to assess the number of labels performing well versus the number performing poorly.
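As a rough sketch of the idea (not DataRobot's implementation), treating each label as a binary feature means thresholding each label's predicted probability independently and scoring each column on its own; the macro average is then the plain mean over labels. The data, function name, and threshold below are invented for illustration:

```python
# Hypothetical sketch: per-label metrics for a multilabel model.
# Each label is treated as an independent binary problem; a predicted
# probability at or above the threshold counts as a positive prediction.

def per_label_accuracy(y_true, y_prob, threshold):
    """y_true: rows of 0/1 actuals; y_prob: matching predicted probabilities.
    Returns one accuracy value per label (column)."""
    n_rows, n_labels = len(y_true), len(y_true[0])
    accuracies = []
    for j in range(n_labels):
        correct = sum(
            1 for i in range(n_rows)
            if (y_prob[i][j] >= threshold) == bool(y_true[i][j])
        )
        accuracies.append(correct / n_rows)
    return accuracies

# Toy data: 4 rows, 2 labels
y_true = [[1, 0], [1, 1], [0, 0], [0, 1]]
y_prob = [[0.9, 0.2], [0.8, 0.4], [0.3, 0.4], [0.6, 0.6]]

per_label = per_label_accuracy(y_true, y_prob, threshold=0.5)
macro_avg = sum(per_label) / len(per_label)  # macro-average over all labels
```

The macro average weights every label equally regardless of how often it occurs, which is why a few poorly performing rare labels can pull the averaged curve down noticeably.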

The table below describes the areas of the Multilabel: Per-Label Metrics chart. See also detailed descriptions of the [ROC Curve metrics](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/metrics-classic.html) and [graph interpretation](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html).

|  | Component | Description |
| --- | --- | --- |
| (1) | Metric value table | Displays model performance for each target label. Changing the display or prediction threshold updates the table. |
| (2) | Threshold selector | Sets whether to display values for the display or prediction thresholds. Changing either value updates the metric value table and chart. |
| (3) | Metric value chart and metric selector | Displays graphed results based on the set display threshold. Use the dropdown to select the performance metric to display in the chart. |
| (4) | Average performance report | The macro-averaged model performance, over all labels, for each metric. Metrics are defined in the deep dive below. |
| (5) | Label and data selectors | Sets the data partition—validation, cross validation, or holdout (if unlocked)—to report per-label values for. Display all or only pinned (selected) labels. |

### Metric value table

The metric value table reports a model's performance for each target label (considered as a binary feature). The metrics in the table correspond to the [Display threshold](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/multilabel.html#threshold-selector); change the threshold value to view label metrics at different threshold values.

Set the metric value table to All labels to see metric values for each label in the experiment. Use the controls at the bottom of the table to page through the display and explore all labels. Additionally, change the table view as follows:

|  | Action |
| --- | --- |
| (1) | Use the search field to modify the table to display only those labels that match the search criteria. |
| (2) | Click on a column header to change the sort order of labels in the table. |
| (3) | Click the Show option to include (or remove) a specific label's results from the metric value chart. The option works whether you are displaying all or only pinned labels. |
| (4) | Click the pin to include (or remove) the selected label from the chart display to the left. |

The ID column (#) is static; together with sorting, it lets you identify the labels for which the metric of interest is above or below a given value.

### Threshold selector

The threshold section provides fields for entering both a Display threshold and a Prediction threshold.

| Use | To |
| --- | --- |
| Display threshold | Set the display threshold level. Changes to the value update both the chart display and the metric value table to the right. |
| Prediction threshold | Set the model prediction threshold, which is applied when making predictions. |
| Arrows | Swap values for the current display and prediction thresholds. |

Note that only Use Case owners can update the prediction threshold.

### Metric value chart

The chart consists of graphed results and a metric selector.

The X-axis represents different values of the prediction threshold; the Y-axis plots values for the selected metric. Together, they trace the average model performance curve for the selected metric. The value set in the Display threshold field is indicated by a round, unfilled point on the line. Changes to the threshold and/or metric update the graph.
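Conceptually, each point on the curve is the selected metric recomputed at one candidate threshold. A minimal sketch of that sweep, using F1 for a single label on invented data (the function and values are hypothetical, not DataRobot's code):

```python
# Hypothetical sketch: sweep the threshold to trace the metric curve
# shown in the chart (metric value on the Y-axis, threshold on X).

def f1_at_threshold(y_true, y_prob, threshold):
    """Binary F1 for one label at a given threshold."""
    tp = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p >= threshold)
    fp = sum(1 for t, p in zip(y_true, y_prob) if t == 0 and p >= threshold)
    fn = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p < threshold)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy actuals and predicted probabilities for one label
y_true = [1, 1, 0, 0, 1]
y_prob = [0.9, 0.6, 0.55, 0.2, 0.4]

# One point per candidate threshold; plotting these pairs gives the curve.
curve = [(t / 10, f1_at_threshold(y_true, y_prob, t / 10)) for t in range(1, 10)]
```

Averaging such per-label curves pointwise would give the white average line described below.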

### Display label metrics

By default, the metric value chart displays the average value, as a white line, across all labels for the selected metric. You can highlight one or more labels to compare their metric values against the average. The color of the label name changes to match its line entry in the chart.

#### Show option

Select Show next to a label to add the individual results for that label to the chart.

For example, consider an experiment with 100 labels. To find the labels with accuracy above 0.7, sort by accuracy and note the row index of the last label whose accuracy is above 0.7. Comparing that row index to the total number of rows gives the percentage of labels at or above that accuracy.
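The worked example above amounts to a sort plus an index lookup. A small sketch with invented label names and accuracy values (none of these come from DataRobot):

```python
# Hypothetical sketch of the example: sort labels by a metric, then read
# off the share of labels at or above a cutoff (here, accuracy >= 0.7).
label_accuracy = {"cat": 0.91, "dog": 0.74, "bird": 0.69, "fish": 0.55}

# Sort descending, mimicking a click on the accuracy column header.
ranked = sorted(label_accuracy.items(), key=lambda kv: kv[1], reverse=True)

# Row index (0-based) of the last label meeting the cutoff.
last_above = max(i for i, (_, acc) in enumerate(ranked) if acc >= 0.7)

# Fraction of labels with accuracy at or above the cutoff.
share = (last_above + 1) / len(ranked)
```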

When you pin a label, Show is automatically enabled. Click the eye icon again to remove the label from the chart.

#### Pinning labels

Use the pin option to select particular labels for display in the chart. Pinning a label automatically enables the Show option for that label, adding its metric value to the chart. After pinning labels, use the Pinned labels tab to show only those labels you selected.

Toggling back to All labels preserves the label's entry on the chart.
