# SHAP Prediction Explanations

> SHAP Prediction Explanations - Enable SHAP-based Prediction Explanations prior to building tree- and
> linear-based models to understand which features drive each model decision.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.594063+00:00` (UTC).

## Primary page

- [SHAP Prediction Explanations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html): Full documentation for this topic (HTML).

## Sections on this page

- [Preview Prediction Explanations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#preview-prediction-explanations): In-page section heading.
- [Interpret SHAP Prediction Explanations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#interpret-shap-prediction-explanations): In-page section heading.
- [View points in the distribution](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#view-points-in-the-distribution): In-page section heading.
- [Computing and downloading explanations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#computing-and-downloading-explanations): In-page section heading.
- [Upload a dataset](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#upload-a-dataset): In-page section heading.
- [Prediction Explanation calculations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#prediction-explanation-calculations): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Model insights](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/index.html): Linked from this page.
- [Understand](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/index.html): Linked from this page.
- [Prediction Explanations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/index.html): Linked from this page.
- [Include only models with SHAP value support](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/additional.html): Linked from this page.
- [SHAP reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/shap-ref.html): Linked from this page.
- [considerations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/predex-overview.html#feature-considerations): Linked from this page.
- [stacked predictions](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/data-partitioning.html#what-are-stacked-predictions): Linked from this page.
- [Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/predict.html): Linked from this page.

## Documentation content

# SHAP Prediction Explanations

> [!NOTE] SHAP vs XEMP
> This section describes SHAP-based Prediction Explanations. See also the general description of Prediction Explanations for an overview of [SHAP and XEMP methodologies](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/index.html).
> 
> In the DataRobot Classic UI, to retrieve SHAP-based Prediction Explanations, you must enable the [Include only models with SHAP value support](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/additional.html) advanced option prior to model building.

SHAP-based explanations describe what drives predictions on a row-by-row basis by estimating how much each feature contributes to a given prediction's difference from the average. They answer why a model made a certain prediction (what drives a customer's decision to buy: age? gender? buying habits?) and then quantify the impact of each factor on that decision. They are intuitive, unbounded (computed for all features), fast, and, due to the open source nature of SHAP, transparent. Not only does SHAP help you understand model behavior quickly, it also lets you easily validate whether a model adheres to business rules.

See the [SHAP reference](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/shap-ref.html) for additional technical detail. See the associated SHAP [considerations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/predex-overview.html#feature-considerations) for important additional information.

## Preview Prediction Explanations

When previewed, SHAP-based Prediction Explanations display the top five features for each row. This provides a general "intuition" of model performance. You can then quickly [compute and download](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#computing-and-downloading-explanations) explanations for the entire training dataset to perform deeper analysis. See [SHAP calculations](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#prediction-explanation-calculations) for more detail.

You can also:

- Upload external datasets and manually compute (and download) explanations.
- Access explanations via the API, for both deployed and Leaderboard models.

## Interpret SHAP Prediction Explanations

Open the Prediction Explanations tab to see an interactive preview of the top five features that contribute most to the difference from the average (base) prediction value. In other words, how much does each feature explain the difference? For example:

The elements of the display are:

|  | Element | Value in example |
| --- | --- | --- |
| (1) | Base (average) prediction value | 43.11 |
| (2) | Prediction value for the row | 67.5 |
| (3) | Contribution, or how much each feature explains the difference between the base and prediction values | Varies from row to row and from feature to feature |
| (4) | Top 5 features | Varies from row to row |

Subtract the base prediction value from the row prediction value to determine the difference from the average, in this case 67.5 − 43.11 ≈ 24.4. The contribution then describes how much each listed feature is responsible for pushing the target away from the average (the allocation of 24.4 among the features).

SHAP is additive, which means that the sum of the contributions for all features equals the difference between the base and row prediction values. (See additivity details [here](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/shap-ref.html#additivity-in-prediction-explanations).)
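This additivity can be checked directly. A minimal sketch, using the base and prediction values from the example above and made-up per-feature contributions:

```python
# Verify SHAP additivity: base value + sum of all contributions == row prediction.
# The feature names and contribution values below are illustrative, not from a
# real model; only the base value (43.11) and prediction (67.5) come from the
# documented example.
base_value = 43.11
contributions = {
    "age": 9.2,
    "income": 7.6,
    "tenure": 4.1,
    "region": 2.3,
    "channel": 1.6,
    "sum_of_all_other_features": -0.41,
}

row_prediction = base_value + sum(contributions.values())
difference_from_average = row_prediction - base_value

print(round(row_prediction, 2))           # 67.5
print(round(difference_from_average, 2))  # 24.39 (~24.4 in the example)
```

If the rounded sum ever fails to match the row prediction, see the SHAP reference for cases where additivity can break.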

Some additional notes on interpreting the visualization:

- Contributions can be either positive or negative. Features that push the predictive value to be higher display in red and are positive numbers. Features that reduce the prediction display in blue and are negative numbers.
- The arrows on the plot are proportionate to the SHAP values positively and negatively impacting the observed prediction.
- The "Sum of all other features" is the sum of features that are not part of the top five  contributors.

See the SHAP reference for information on additivity (including [possible breakages](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/shap-ref.html#additivity-in-prediction-explanations)).
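The top-five bundling can be sketched in a few lines of Python. The feature names and contribution values here are made up for illustration; the ranking by absolute contribution mirrors how the strongest positive and negative drivers surface in the preview:

```python
# Sketch: pick the five largest contributions by absolute value and bundle the
# rest into "Sum of all other features". Values are illustrative only.
contributions = {
    "age": 9.2, "income": -7.6, "tenure": 4.1, "region": -2.3,
    "channel": 1.6, "visits": 0.9, "clicks": -0.4, "device": 0.2,
}

ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
top_five = dict(ranked[:5])
sum_of_all_other_features = sum(v for _, v in ranked[5:])

print(sorted(top_five))                       # the five strongest drivers
print(round(sum_of_all_other_features, 2))    # 0.7
```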

### View points in the distribution

Use the prediction distribution component to click through a range of prediction values and understand how the top and bottom values are explained. In the chart, the Y-axis shows the prediction value, while the X-axis indicates the frequency.

Notice that if you look at a point near the bottom of the distribution, the contribution values show more blue than red (more negative than positive contributions). This is because the majority of key features are pushing the prediction value lower.
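That sign balance follows from additivity: a prediction below the base value requires the negative contributions to outweigh the positive ones. A minimal sketch with made-up contribution values for a row near the bottom of the distribution:

```python
# For a row whose prediction falls below the base value, the negative (blue)
# contributions must outweigh the positive (red) ones. Values are illustrative.
base_value = 43.11
contributions = [-11.2, -6.5, -3.1, 2.4, 1.0]  # mostly negative for a low point

row_prediction = base_value + sum(contributions)
negatives = [c for c in contributions if c < 0]
positives = [c for c in contributions if c > 0]

print(row_prediction < base_value)           # True
print(abs(sum(negatives)) > sum(positives))  # True
```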

## Computing and downloading explanations

While DataRobot automatically computes the explanations for selected records, you can compute explanations for all records by clicking the calculator icon. DataRobot computes the remaining explanations and, when ready, activates a download button. Click it to save the list of explanations as a CSV file. Note that the CSV contains only the top 100 explanations for each record; to see all explanations, use the API.
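Once downloaded, the CSV can be post-processed locally. The sketch below rebuilds per-row contribution dictionaries from a wide layout; note that the column names used here are hypothetical stand-ins, so check the header of your actual export before adapting it:

```python
# Sketch: read a Prediction Explanations CSV and rebuild per-row contribution
# dicts. The column layout and names below are hypothetical, not the documented
# export format.
import csv
import io

# A stand-in for the downloaded file.
csv_text = """row_id,prediction,feature_1,value_1,contribution_1,feature_2,value_2,contribution_2
0,67.5,age,54,9.2,income,72000,7.6
1,25.7,age,21,-11.2,tenure,1,-6.5
"""

rows = []
for record in csv.DictReader(io.StringIO(csv_text)):
    contributions = {}
    i = 1
    while f"feature_{i}" in record:
        contributions[record[f"feature_{i}"]] = float(record[f"contribution_{i}"])
        i += 1
    rows.append({"prediction": float(record["prediction"]),
                 "contributions": contributions})

print(rows[0]["contributions"])  # {'age': 9.2, 'income': 7.6}
```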

## Upload a dataset

To compute explanations for additional data using the same model, click Upload new dataset:

DataRobot opens the [Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/predict.html) tab where you can upload a new, external dataset. When complete, return to Prediction Explanations, where the new dataset is listed in the download area.

Click the calculator icon to compute, and then download, explanations in the same way as with the training dataset. DataRobot runs computations for the entire external dataset.

## Prediction Explanation calculations

DataRobot automatically computes SHAP Prediction Explanations. In the UI, SHAP initially returns the five most important features in each previewed row. Additional features are bundled and reported in `Sum of all other features`. (You can [compute for all features](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/understand/pred-explain/shap-pe.html#computing-and-downloading-explanations) as described above.) In the API, explanations for a given row are limited to the top 100 most important features in that row. If there are more features, they get bundled together in the `shapRemainingTotal` value. See the public API documentation for more detail.
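Because additivity still holds under the top-100 cap, the `shapRemainingTotal` value closes the gap between the row's prediction, the base value, and the returned contributions. A sketch with made-up values (and far fewer than 100 features, for brevity):

```python
# Sketch: when per-row explanations are capped at the top 100 features, the
# leftover contribution mass is reported as shapRemainingTotal, so that
# base + sum(returned contributions) + shapRemainingTotal == prediction.
# All values below are illustrative.
base_value = 43.11
returned_contributions = {"age": 9.2, "income": 7.6, "tenure": 4.1}  # imagine 100 of these
row_prediction = 67.5

shap_remaining_total = row_prediction - base_value - sum(returned_contributions.values())
print(round(shap_remaining_total, 2))  # 3.49
```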
