# Individual Prediction Explanations (XEMP)

> Individual Prediction Explanations (XEMP) - Provides row-by-row detail about Individual Prediction
> Explanations based on XEMP, a proprietary DataRobot method.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.756416+00:00` (UTC).

## Primary page

- [Individual Prediction Explanations (XEMP)](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/xemp-predex.html): Full documentation for this topic (HTML).

## Sections on this page

- [XEMP-based explanations](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/xemp-predex.html#xemp-based-explanations): In-page section heading.

## Related documentation

- [NextGen UI documentation](https://docs.datarobot.com/en/docs/workbench/index.html): Linked from this page.
- [Workbench](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/index.html): Linked from this page.
- [Predictive experiments](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/index.html): Linked from this page.
- [Evaluate models](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/index.html): Linked from this page.
- [SHAP explanations](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/shap-predex.html): Linked from this page.
- [DataRobot Classic documentation](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/xemp-pe.html): Linked from this page.

## Documentation content

# Individual Prediction Explanations (XEMP)

| Tab | Description |
| --- | --- |
| Performance | Uses XEMP as the basis for generating prediction explanations, which help you understand what drives predictions. Only available for models and projects that do not support SHAP. |

> [!NOTE] Note
> Anomaly detection, multiclass, and clustering models do not support [SHAP explanations](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/experiments/experiment-insights/shap-predex.html) and therefore use XEMP. All other model types use SHAP.

## XEMP-based explanations

XEMP-based explanations use a proprietary DataRobot method that works with all model types. They are univariate, letting you view the distribution of the effect each individual feature has on predictions. (SHAP, by contrast, is multivariate, measuring the effect of varying multiple features at once.) XEMP explanations are only available when a model or experiment type does not support SHAP; DataRobot determines the appropriate Individual Prediction Explanations type and makes it available when you select a model.

To access XEMP insights, click a model in the Leaderboard and choose Individual Prediction Explanations (XEMP) to expand the display. If prompted, click Compute Feature Impact.

After successful computation, the preview displays. See the [DataRobot Classic documentation](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/xemp-pe.html) for full details on working with the preview, interpreting the display, and computing and downloading explanations.
