SHAP reference

SHAP (SHapley Additive exPlanations) is an open-source algorithm used to address the accuracy vs. explainability dilemma. SHAP is based on Shapley values, a coalitional game theory framework developed by Nobel Prize-winning economist Lloyd Shapley. The values are a unified measure of feature importance and are used to interpret predictions from machine learning models. SHAP values not only indicate how important a feature is, but also the direction of its relationship with the prediction (positive or negative).

The SHAP values of a feature sum up to the difference between the prediction for the instance and the expected prediction (averaged across the dataset). If we consider a model that makes predictions based on several input features, the SHAP value for each feature would represent the average marginal contribution of that feature across all possible feature combinations.
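
As a quick illustration of this additivity, the following sketch uses the open-source shap package with an XGBoost regressor on a synthetic dataset (the data, model, and variable names are assumptions for the example, not DataRobot internals) and checks that the expected value plus a row's SHAP values reproduces that row's prediction.

import numpy as np
import shap
import xgboost
from sklearn.datasets import make_regression

# Illustrative model and data -- any tree ensemble would do.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)        # exact, high-speed explainer for tree ensembles
shap_values = explainer.shap_values(X)       # shape: (n_samples, n_features)
base_value = float(np.ravel(explainer.expected_value)[0])

# Local accuracy: base value + sum of a row's SHAP values ~= that row's prediction.
row = 0
print(np.isclose(base_value + shap_values[row].sum(), model.predict(X[[row]])[0], atol=1e-2))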

To understand the origination of the project:

Lloyd Shapley asked: How should we divide a payout among a cooperating team whose members made different contributions?

Shapley values answer:

  • The Shapley value for member X is the amount of credit they get.
  • For every subteam, how much marginal value does member X add when they join? The Shapley value is the weighted mean of these marginal contributions (a worked sketch follows this list).
  • Total payout is the sum of Shapley values over members.
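
To make the payout-splitting idea concrete, here is a small worked sketch (not from the DataRobot documentation) that computes exact Shapley values for a hypothetical three-member team by averaging each member's marginal contribution over every join order; the payout numbers are invented for the example.

from itertools import permutations

# Hypothetical payouts for each coalition of members (illustrative numbers only).
payout = {
    frozenset(): 0,
    frozenset({"A"}): 10, frozenset({"B"}): 20, frozenset({"C"}): 5,
    frozenset({"A", "B"}): 40, frozenset({"A", "C"}): 20, frozenset({"B", "C"}): 30,
    frozenset({"A", "B", "C"}): 60,
}
members = ["A", "B", "C"]

# Shapley value of a member = mean, over all join orders, of the marginal value
# that member adds at the moment they join.
orders = list(permutations(members))
shapley = {m: 0.0 for m in members}
for order in orders:
    joined = set()
    for m in order:
        before = payout[frozenset(joined)]
        joined.add(m)
        shapley[m] += (payout[frozenset(joined)] - before) / len(orders)

print(shapley)                # approximately {'A': 19.17, 'B': 29.17, 'C': 11.67}
print(sum(shapley.values()))  # 60.0 -- the total payout of the full team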

Scott Lundberg is the primary author of the SHAP Python package, providing a programmatic way to explain predictions:

We can divide credit for model predictions among features!

By treating each feature value as a “player” in a game in which the prediction is the payout, SHAP explains how to fairly distribute that “payout” among the features.

SHAP has become increasingly popular due to the open-source SHAP package, which provides:

  • A high-speed exact algorithm for tree ensemble methods (called "TreeExplainer").
  • A high-speed approximation algorithm for deep learning models (called "DeepExplainer").
  • Several model-agnostic algorithms to estimate Shapley values for any model (including "KernelExplainer" and "PermutationExplainer").

The following key properties of SHAP make it particularly suitable for DataRobot machine learning:

  • Local accuracy: The sum of the feature attributions is equal to the output of the model DataRobot is "explaining."
  • Missingness: Features that are missing in an observation receive no attribution; they have no impact on the explanation.
  • Consistency: Changing a model to make a feature more important will never decrease the SHAP attribution assigned to that feature. For example, suppose model A uses feature X. You then build a new model, B, that relies on feature X more heavily (perhaps by doubling the coefficient for that feature and keeping everything else the same). Because of SHAP's consistency property, the SHAP importance of feature X in model B is at least as high as it was in model A.

Additional readings are listed below.

SHAP contributes to model explainability by:

  • Feature Impact: SHAP shows, at a high level, which features are driving model decisions. With permutation-based Feature Impact, by contrast, results are sensitive to sample size and can change when recomputed unless the sample is quite large. See the deep dive.

  • Prediction explanations: There are certain types of data that don't lend themselves to producing results for all columns. This is especially problematic in regulated industries like banking and insurance. SHAP explanations reveal how much each feature is responsible for a given prediction being different from the average. For example, when a real estate record is predicted to sell for $X, SHAP prediction explanations illustrate how much each feature contributes to that price.

  • Feature Effects:

    • For Workbench, Feature Effects always uses Permutation Feature Impact.
    • For DataRobot Classic, SHAP does not change the Feature Effects results. The Predicted, Actual, and Partial dependence plots do not use SHAP in any way. However, the bar chart on the left is ordered by SHAP Feature Impact instead of the usual Permutation Feature Impact.

Note

To retrieve the SHAP-based Feature Impact or Prediction Explanations visualizations, you must enable the Include only models with SHAP value support advanced option prior to model building.

Feature Impact

Feature Impact assigns importance to each feature (j) used by a model.

With SHAP

Given a model and some observations (up to 5000 rows in the training data), Feature Impact for each feature j is computed as:

     sample average of abs(shap_values for feature j)

Normalize values such that the top feature has an impact of 100%.
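
A minimal sketch of this aggregation (an illustration, not DataRobot's implementation), assuming a shap_values array of shape (n_samples, n_features) has already been computed, for example with a SHAP explainer as shown earlier:

import numpy as np

def shap_feature_impact(shap_values, feature_names):
    # Sample average of the absolute SHAP value for each feature j.
    impact = np.abs(shap_values).mean(axis=0)
    # Normalize so the top feature has an impact of 100%.
    impact = 100.0 * impact / impact.max()
    return dict(sorted(zip(feature_names, impact), key=lambda kv: -kv[1]))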

With permutation

Given a model and some observations (2500 by default and up to 100,000), calculate the metric for the model based on the actual data. For each column j:

  • Permute the values of column j.
  • Calculate metrics on permuted data.
  • Importance = metric_actual - metric_perm

(Optional) Normalize by the largest resulting value.
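
A simplified sketch of these steps (an illustration, not DataRobot's implementation), assuming a fitted scikit-learn-style model, a NumPy feature matrix X, targets y, and R-squared as a "higher is better" metric:

import numpy as np
from sklearn.metrics import r2_score

def permutation_feature_impact(model, X, y, seed=0):
    rng = np.random.default_rng(seed)
    metric_actual = r2_score(y, model.predict(X))         # metric on the actual data
    importance = {}
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])      # permute the values of column j
        metric_perm = r2_score(y, model.predict(X_perm))  # metric on the permuted data
        importance[j] = metric_actual - metric_perm       # importance = metric_actual - metric_perm
    top = max(importance.values())                        # (optional) normalize by the largest value
    return {j: value / top for j, value in importance.items()}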

Prediction explanations

SHAP prediction explanations are additive. The sum of SHAP values is exactly equal to:

    [prediction - average(prediction)]

DataRobot Classic only (Workbench is SHAP-only): When selecting between XEMP and SHAP, consider your need for accuracy versus interpretability and performance. With XEMP, because all blueprints are included in Autopilot, the results may produce slightly higher accuracy. This is only true in some cases, however; because SHAP supports all key blueprints, the accuracy is often the same. SHAP does provide higher interpretability and performance:

  • Results are intuitive.
  • SHAP is computed for all features.
  • Results often return 5-20 times faster.
  • SHAP is additive.
  • The open source nature provides transparency.

Additivity in prediction explanations

In certain cases, you may notice that SHAP values do not add up to the prediction. This is because SHAP values are additive in the units of the direct model output, which can be different from the units of prediction for several reasons.

  • For most binary classification problems, the SHAP values correspond to a scale that is different from the probability space [0,1]. This is due to the way that these algorithms map their direct outputs y to something always between 0 and 1, most commonly using a nonlinear function like the logistic function prob = logistic(y). (In technical terms, the model's "link function" is logit(p), which is the inverse of logistic(y).) In this common situation, the SHAP values are additive in the pre-link "margin space", not in the final probability space. This means sum(shap_values) = logit(prob) - logit(prob_0), where prob_0 is the training average of the model's predictions.

  • Regression problems with a skewed target may use the natural logarithm log() as a link function in a similar way.

  • The model may have specified an offset (applied before the link) and/or an exposure (applied after the link).

  • The model may "cap" or "censor" its predictions (for example, enforcing them to be non-negative).

The following pseudocode can be used to verify additivity in these cases; api_shap_base_value, ui_shap_base_value, offset, exposure, link_function, and predictions_capping stand in for values obtained from your model and deployment.

from math import exp

# shap_values = output from SHAP prediction explanations
# If you obtained the base value from the UI prediction distribution chart,
# first transform it by the link: base_value = link_function(ui_shap_base_value)
base_value = api_shap_base_value
pred = base_value + sum(shap_values)
if offset is not None:
    pred += offset                      # offset (applied before the link)
if link_function == 'log':
    pred = exp(pred)                    # inverse of the log link
elif link_function == 'logit':
    pred = exp(pred) / (1 + exp(pred))  # inverse of the logit link (logistic function)
if exposure is not None:
    pred *= exposure                    # exposure (applied after the link)
pred = predictions_capping(pred)        # apply any capping/censoring the model uses
# At this point, pred matches the prediction output from the model.

Which explainer is used for which model?

Within a blueprint that supports SHAP, each modeling vertex uses the SHAP explainer that is most appropriate to the model type:

  • Tree-based models (XGBoost, LightGBM, Random Forest, Decision Tree): TreeExplainer
  • Keras deep learning models: DeepExplainer
  • Linear models: LinearExplainer

If a blueprint contains more than one modeling task, the SHAP values are combined additively to yield the SHAP values for the overall blueprint. If a blueprint does not work with the type-specific explainers listed above, then SHAP explanations are provided by the model-agnostic PermutationExplainer.
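
As an illustration of the model-agnostic fallback, the sketch below applies the open-source shap.PermutationExplainer to an arbitrary prediction function (a hypothetical scikit-learn pipeline; the data, model, and version-specific API details are assumptions, and this is not how DataRobot composes blueprint-level values):

import shap
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)

def predict_positive(data):
    # Prediction function for the positive class; PermutationExplainer needs only
    # this function and background data, so it works when no type-specific explainer applies.
    return model.predict_proba(data)[:, 1]

explainer = shap.PermutationExplainer(predict_positive, X[:100])
explanation = explainer(X[:5])      # SHAP values for the first five rows
print(explanation.values.shape)     # (5, 6): one value per row and feature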

SHAP compatibility matrix

See the following for blueprint support with SHAP:

Blueprint Regression Binary classification OTV Prediction servers
Linear models
XGBoost
LightGBM
Keras
Random Forest x x x x
Shallow Random Forest
Frequency Cost / Severity N/A N/A
Stacked / boosted blueprints
Blueprints with calibration
Blenders x x x x
sklearn GBM x x
DataRobot Scoring Code models x x x x

Additional reading

The following public resources provide more information on open-source SHAP:


Updated April 16, 2024