SHAP reference¶
SHAP (SHapley Additive exPlanations) is an open-source algorithm used to address the accuracy vs. explainability dilemma. SHAP is based on Shapley values, the coalitional game theory framework developed by Lloyd Shapley, a Nobel Prize-winning economist. The values are a unified measure of feature importance and are used to interpret predictions from machine learning models. SHAP values not only indicate the importance of a feature, but also the direction of its relationship with the prediction (positive or negative).
The SHAP values of a feature sum up to the difference between the prediction for the instance and the expected prediction (averaged across the dataset). If we consider a model that makes predictions based on several input features, the SHAP value for each feature would represent the average marginal contribution of that feature across all possible feature combinations.
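This sum-to-the-difference property can be made concrete with a small, self-contained sketch (not DataRobot code; the coefficients and rows are made up). For a plain linear model, the exact SHAP value of feature `j` on a row is `w_j * (x_j - mean(x_j))`, so the values sum to the prediction minus the average prediction:

```python
# Toy demonstration: for a linear model f(x) = b + sum_j w_j * x_j,
# the exact SHAP value of feature j on a row is w_j * (x_j - mean(x_j)).
w = [2.0, -1.0, 0.5]             # hypothetical coefficients
b = 10.0                         # hypothetical intercept
rows = [[1.0, 3.0, 2.0],
        [4.0, 0.0, 6.0],
        [2.0, 1.0, 4.0]]

def predict(x):
    return b + sum(wj * xj for wj, xj in zip(w, x))

# Per-feature means and the dataset-average prediction.
means = [sum(col) / len(rows) for col in zip(*rows)]
avg_pred = sum(predict(x) for x in rows) / len(rows)

x = rows[0]
shap_values = [wj * (xj - mj) for wj, xj, mj in zip(w, x, means)]

# Additivity: SHAP values sum to prediction - average prediction.
assert abs(sum(shap_values) - (predict(x) - avg_pred)) < 1e-9
```

For non-linear models the per-feature formula is more involved, but the additivity property holds in exactly the same way.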
To understand the origin of the project:
Lloyd Shapley asked: How should we divide a payout among a cooperating team whose members made different contributions?
Shapley values answer:
- The Shapley value for member X is the amount of credit they receive.
- For every subteam, measure how much marginal value member X adds when joining that subteam. The Shapley value is the weighted mean of these marginal contributions.
- The total payout is the sum of the Shapley values over all members.
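The steps above can be sketched as a brute-force computation over all orders in which members join; the three-member game and its payouts below are hypothetical:

```python
from itertools import permutations

# Hypothetical cooperative game: v(S) is the payout coalition S earns on its own.
v = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 20, frozenset('C'): 0,
     frozenset('AB'): 40, frozenset('AC'): 20, frozenset('BC'): 25,
     frozenset('ABC'): 50}

def shapley(players, v):
    # Average each player's marginal contribution over all join orders.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

values = shapley('ABC', v)
# Efficiency: the Shapley values sum to the grand-coalition payout v({A,B,C}).
assert abs(sum(values.values()) - v[frozenset('ABC')]) < 1e-9
```

Averaging over join orders is equivalent to the usual weighted mean over subteams; the same telescoping argument is why SHAP values sum to the prediction minus the average prediction.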
Scott Lundberg is the primary author of the SHAP Python package, providing a programmatic way to explain predictions:
We can divide credit for model predictions among features!
By treating each value of a feature as a “player” in a game in which the prediction is the payout, SHAP explains how to fairly distribute that “payout” among the features.
SHAP has become increasingly popular due to the open-source SHAP package, which developed:
- A high-speed exact algorithm for tree ensemble methods ("TreeExplainer").
- A high-speed approximation algorithm for deep learning models ("DeepExplainer").
- Several model-agnostic algorithms to estimate Shapley values for any model (including "KernelExplainer" and "PermutationExplainer").
The following key properties of SHAP make it particularly suitable for DataRobot machine learning:
- Local accuracy: The sum of the feature attributions equals the output of the model DataRobot is "explaining."
- Missingness: Features that are already missing have no impact.
- Consistency: Changing a model to make a feature more important will never decrease the SHAP attribution assigned to that feature. (For example, model A uses feature X. You then build a new model, B, that uses feature X more heavily, perhaps by doubling its coefficient and keeping everything else the same. Because of SHAP's consistency property, the SHAP importance of feature X in model B is at least as high as it was in model A.)
Additional readings are listed below.
SHAP contributes to model explainability by:

Feature Impact: SHAP shows, at a high level, which features are driving model decisions. Without SHAP, results are sensitive to sample size and can change when recomputed unless the sample is quite large. See the deep dive.

Prediction explanations: There are certain types of data that don't lend themselves to producing results for all columns. This is especially problematic in regulated industries like banking and insurance. SHAP explanations reveal how much each feature is responsible for a given prediction being different from the average. For example, when a real estate record is predicted to sell for $X, SHAP prediction explanations illustrate how much each feature contributes to that price.

Feature Effects:
- In Workbench, Feature Effects always uses Permutation Feature Impact.
- In DataRobot Classic, SHAP does not change the Feature Effects results. The Predicted, Actual, and Partial dependence plots do not use SHAP in any way; however, the bar chart on the left is ordered by SHAP Feature Impact instead of the usual Permutation Feature Impact.
Feature Impact¶
Feature Impact assigns importance to each feature (`j`) used by a model.
With SHAP¶
Given a model and some observations (up to 5,000 rows in the training data), Feature Impact for each feature `j` is computed as follows:
1. Take the sample average of `abs(shap_values)` for feature `j`.
2. Normalize the values such that the top feature has an impact of 100%.
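A minimal sketch of this aggregation, assuming an illustrative matrix of per-row SHAP values (rows × features):

```python
# Illustrative per-row SHAP values for 3 rows and 3 features (made up).
shap_matrix = [[ 1.0, -2.0,  0.5],
               [-3.0,  1.0,  0.5],
               [ 2.0, -3.0, -1.0]]

n_rows = len(shap_matrix)
n_features = len(shap_matrix[0])

# Step 1: mean absolute SHAP value per feature.
raw_impact = [sum(abs(row[j]) for row in shap_matrix) / n_rows
              for j in range(n_features)]

# Step 2: normalize so the top feature scores 100%.
top = max(raw_impact)
impact_pct = [100.0 * r / top for r in raw_impact]
```

Because the aggregation is a simple average of absolute values over the sampled rows, it is deterministic for a fixed sample, unlike permutation-based importance, which depends on the random shuffles.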
With permutation¶
Given a model and some observations (2,500 rows by default, up to 100,000), calculate the metric for the model based on the actual data. Then, for each column `j`:
1. Permute the values of column `j`.
2. Calculate the metric on the permuted data.
3. Importance = `metric_actual - metric_perm`.
4. (Optional) Normalize by the largest resulting value.
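The steps above can be sketched as follows. The toy dataset, model, and metric (negative MSE, so that larger is better and `metric_actual - metric_perm` is positive for useful features) are assumptions for illustration, and a fixed reversal stands in for a random shuffle to keep the example deterministic:

```python
# Toy data: feature 0 fully determines the target; feature 1 is never used.
X = [[float(i), 0.0] for i in range(10)]
y = [2.0 * row[0] for row in X]

def model(row):
    # Toy "fitted" model that relies only on feature 0.
    return 2.0 * row[0]

def metric(data, target):
    # Negative mean squared error, so larger is better.
    return -sum((model(r) - t) ** 2 for r, t in zip(data, target)) / len(target)

metric_actual = metric(X, y)

importances = []
for j in range(2):
    permuted = [row[:] for row in X]
    col = [row[j] for row in permuted][::-1]   # fixed permutation: reverse
    for row, val in zip(permuted, col):
        row[j] = val                           # permute only column j
    importances.append(metric_actual - metric(permuted, y))
```

Permuting the informative feature degrades the metric sharply, yielding a large importance, while permuting the unused feature leaves predictions unchanged and yields an importance of zero.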
Prediction explanations¶
SHAP prediction explanations are additive. The sum of SHAP values is exactly equal to:
`prediction - average(prediction)`
DataRobot Classic only (Workbench is SHAP-only): When choosing between XEMP and SHAP, consider your need for accuracy versus interpretability and performance. With XEMP, because all blueprints are included in Autopilot, the results may yield slightly higher accuracy. However, because SHAP supports all key blueprints, accuracy is often the same. SHAP does provide higher interpretability and performance:
- Results are intuitive.
- SHAP is computed for all features.
- Results often return 5-20 times faster.
- SHAP is additive.
- The open-source nature provides transparency.
Additivity in prediction explanations¶
In certain cases, you may notice that SHAP values do not add up to the prediction. This is because SHAP values are additive in the units of the direct model output, which can be different from the units of prediction for several reasons.

- For most binary classification problems, the SHAP values correspond to a scale that is different from the probability space [0, 1]. This is due to the way these algorithms map their direct output `y` to something always between 0 and 1, most commonly using a nonlinear function such as the logistic function, `prob = logistic(y)`. (In technical terms, the model's "link function" is `logit(p)`, which is the inverse of `logistic(y)`.) In this common situation, the SHAP values are additive in the pre-link "margin space," not in the final probability space. This means `sum(shap_values) = logit(prob) - logit(prob_0)`, where `prob_0` is the training average of the model's predictions.
- Regression problems with a skewed target may use the natural logarithm `log()` as a link function in a similar way.
- The model may have specified an offset (applied before the link) and/or an exposure (applied after the link).
- The model may "cap" or "censor" its predictions (for example, enforcing them to be non-negative).
The following pseudocode can be used for verifying additivity in these cases.
```python
# shap_values = output from SHAP prediction explanations.
# If you obtained the base_value from the UI prediction distribution chart,
# first transform it by the link function.
base_value = api_shap_base_value or link_function(ui_shap_base_value)
pred = base_value + sum(shap_values)
if offset is not None:
    pred += offset          # offset is applied before the link
if link_function == 'log':
    pred = exp(pred)
elif link_function == 'logit':
    pred = exp(pred) / (1 + exp(pred))
if exposure is not None:
    pred *= exposure        # exposure is applied after the link
pred = predictions_capping(pred)
# At this point, pred matches the prediction output from the model.
```
Which explainer is used for which model?¶
Within a blueprint that supports SHAP, each modeling vertex uses the SHAP explainer that is most appropriate to the model type:
- Tree-based models (XGBoost, LightGBM, Random Forest, Decision Tree): TreeExplainer
- Keras deep learning models: DeepExplainer
- Linear models: LinearExplainer
If a blueprint contains more than one modeling task, the SHAP values are combined additively to yield the SHAP values for the overall blueprint. If a blueprint does not work with the type-specific explainers listed above, SHAP explanations are provided by the model-agnostic PermutationExplainer.
Additional reading¶
The following public resources provide additional information on open-source SHAP: