Entropic Variable Boosting for Explainability and Interpretability in Machine Learning

10/18/2018
by François Bachoc, et al.

In this paper, we present a new explainability formalism to make clear the impact of each variable on the predictions given by black-box decision rules. Our method consists of evaluating the decision rules on test samples generated in such a way that each variable is stressed incrementally while preserving the original distribution of the machine learning problem. We then propose a new computationally efficient algorithm to stress the variables, which only reweights the reference observations and predictions. This makes our methodology scalable to large datasets. Results obtained on standard machine learning datasets are presented and discussed.
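The reweighting idea can be illustrated with a small sketch. The snippet below is not the authors' exact algorithm; it uses a generic entropic (exponential-tilting) construction, where observations are reweighted so that the mean of one stressed variable shifts to a target value while the weights stay as close as possible, in a maximum-entropy sense, to the empirical distribution. The function name `stress_weights`, the bisection solver, and the toy data are all illustrative assumptions.

```python
import numpy as np

def stress_weights(x_j, target_mean, tol=1e-8, max_iter=100):
    """Illustrative exponential-tilting weights.

    Reweight observations so the weighted mean of variable x_j equals
    `target_mean`, keeping the weights close (in KL divergence) to the
    uniform empirical weights. Sketch only; the paper's algorithm may differ.
    """
    t_lo, t_hi = -50.0, 50.0
    # Bisection on the tilting parameter t: weights w_i ∝ exp(t * x_j[i]),
    # whose weighted mean increases monotonically in t.
    for _ in range(max_iter):
        t = 0.5 * (t_lo + t_hi)
        w = np.exp(t * (x_j - x_j.mean()))
        w /= w.sum()
        m = np.dot(w, x_j)
        if abs(m - target_mean) < tol:
            break
        if m < target_mean:
            t_lo = t
        else:
            t_hi = t
    return w

# Usage: stress one variable of a reference sample and reweight the
# already-computed black-box predictions instead of regenerating data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
preds = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # stand-in for black-box output
j = 0
w = stress_weights(X[:, j], target_mean=X[:, j].mean() + 0.5)
print("stressed prediction rate:", np.dot(w, preds), "baseline:", preds.mean())
```

Because only the weights change, the black-box model is queried once on the reference sample; stressing a variable at several levels then amounts to recomputing weighted averages of the stored predictions, which is what makes this kind of approach scale to large datasets.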
