Shapley Residuals: Quantifying the limits of the Shapley value for explanations

I. Elizabeth Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, and Sorelle Friedler. Presented at the 5th ICML Workshop on Human Interpretability in Machine Learning (WHI), 2020.

Abstract

Popular feature importance techniques compute additive approximations to nonlinear models by first defining a cooperative game that describes the value of different subsets of the model’s features, then calculating the resulting game’s Shapley values to attribute credit additively among the features. However, the specific modeling settings in which the Shapley values are a poor approximation of the true game have not been well characterized. In this paper we use an interpretation of Shapley values as the result of an orthogonal projection between vector spaces to calculate a residual representing the kernel component of that projection. We provide an algorithm for computing these residuals, characterize different modeling settings by the values of their residuals, and demonstrate that the residuals capture information about model predictions that Shapley values cannot.
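To make the projection picture concrete, here is a minimal sketch, not the paper's exact construction: the paper defines residuals via an orthogonal projection in a vector space of games, whereas the toy version below simply subtracts from a game the additive game induced by its Shapley values, which illustrates the same idea of a leftover component that additive attributions cannot represent. The game encoding (a dict from frozensets of players to values) and the names shapley_values and shapley_residual are illustrative assumptions, not identifiers from the paper.

import itertools
import math

def shapley_values(v, n):
    # Exact Shapley values for a cooperative game v (dict: frozenset -> float)
    # over players {0, ..., n-1}, via the standard weighted-marginal formula.
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(len(others) + 1):
            for coalition in itertools.combinations(others, size):
                s = frozenset(coalition)
                weight = (math.factorial(len(s)) * math.factorial(n - len(s) - 1)
                          / math.factorial(n))
                phi[i] += weight * (v[s | {i}] - v[s])
    return phi

def shapley_residual(v, n):
    # Difference between the game and the additive game reconstructed from
    # its Shapley values; it is zero everywhere iff the additive summary is exact.
    phi = shapley_values(v, n)
    return {s: val - (v[frozenset()] + sum(phi[i] for i in s))
            for s, val in v.items()}

# A two-player game with an interaction term: v({0,1}) exceeds the sum of
# the singleton values, so no additive attribution can reproduce it exactly.
v = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 1.0,
     frozenset({0, 1}): 4.0}
print(shapley_values(v, 2))   # [2.0, 2.0]
print(shapley_residual(v, 2)) # -1.0 on each singleton, 0.0 at {} and {0,1}

Running this prints Shapley values [2.0, 2.0] and a residual of -1.0 on each singleton coalition but 0.0 on the empty and grand coalitions: the efficiency axiom forces the residual to vanish at the endpoints, so the interaction shows up only on intermediate coalitions, the kind of information about the game that the abstract says Shapley values cannot capture.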

Events

July 17, 2020, 5:30-6:30 am and 9:30-10:30 am Eastern: Spotlight talks at WHI 2020