Generalized SHAP: Generating multiple types of explanations in machine learning

06/12/2020
by   Dillon Bowen, et al.

Many important questions about a model cannot be answered just by explaining how much each feature contributes to its output. To answer a broader set of questions, we generalize a popular, mathematically well-grounded explanation technique, Shapley Additive Explanations (SHAP). Our new method - Generalized Shapley Additive Explanations (G-SHAP) - produces many additional types of explanations, including: 1) General classification explanations: Why is this sample more likely to belong to one class rather than another? 2) Intergroup differences: Why do our model's predictions differ between groups of observations? 3) Model failure: Why does our model perform poorly on a given sample? We formally define these types of explanations and illustrate their practical use on real data.
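The core idea - attributing a *general* function of model output (not just a single prediction) to input features via Shapley values - can be sketched with Monte Carlo permutation sampling. Everything below (`toy_model`, `g`, `sample_shapley`) is an illustrative assumption, not the authors' implementation or API; swapping `g` for, say, a loss function would yield a "model failure" explanation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(X):
    # Toy linear model: y = 3*x0 + 1*x1; x2 is irrelevant.
    return 3 * X[:, 0] + 1 * X[:, 1]

def g(preds):
    # General explanation target: here, mean prediction over the explained
    # sample. Replacing g (e.g. with a loss) changes the question answered.
    return preds.mean()

def sample_shapley(model, g, X_explain, X_background, n_perm=200, rng=rng):
    """Estimate Shapley values of g(model(.)) by sampling feature orderings."""
    n_features = X_explain.shape[1]
    phi = np.zeros(n_features)
    for _ in range(n_perm):
        order = rng.permutation(n_features)
        # Start from randomly drawn background rows ("feature absent"),
        # then switch features to their explained values one at a time.
        X_cur = X_background[rng.integers(len(X_background),
                                          size=len(X_explain))].copy()
        prev = g(model(X_cur))
        for j in order:
            X_cur[:, j] = X_explain[:, j]
            cur = g(model(X_cur))
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_perm

X_bg = rng.normal(size=(100, 3))   # background distribution
X_ex = np.ones((10, 3))            # samples to explain, all at (1, 1, 1)
phi = sample_shapley(toy_model, g, X_ex, X_bg)
print(np.round(phi, 2))
```

With a standard-normal background, the estimates should come out near (3, 1, 0), matching the model's coefficients at the explained point and assigning the irrelevant feature roughly zero credit.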
