On the overlooked issue of defining explanation objectives for local-surrogate explainers

06/10/2021
by   Rafael Poyiadzi, et al.

Local surrogate approaches for explaining machine learning model predictions have appealing properties, such as being model-agnostic and flexible in their modelling. Several methods fit this description and share this goal. However, despite their shared overall procedure, they set out different objectives, extract different information from the black box, and consequently produce diverse explanations that are, in general, incomparable. In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation. We discuss the implications that this lack of agreement, and of clarity, amongst the methods' objectives has on the research and practice of explainability.
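The shared overall procedure the abstract refers to can be illustrated with a minimal sketch of a LIME-style local surrogate: perturb the instance, query the black box, and fit a simple weighted model whose coefficients act as the explanation. This is an illustrative sketch only; the function name `local_surrogate` and the specific sampling and weighting choices (Gaussian perturbations, an RBF proximity kernel) are assumptions, and it is precisely such choices that the paper argues differ across methods.

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear surrogate to black_box around instance x.

    Hypothetical sketch of the common procedure: sample perturbations
    near x, query the black box on them, and fit a simple model whose
    coefficients serve as the local explanation.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # Perturb the instance with Gaussian noise (one sampling choice among many).
    Z = x + scale * rng.standard_normal((n_samples, x.size))
    # Query the black box at each perturbed point.
    y = np.array([black_box(z) for z in Z])
    # Weight samples by proximity to x (RBF kernel; another method-specific choice).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares for the local linear coefficients (plus intercept).
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # feature weights = the local explanation
```

On an already-linear black box the surrogate recovers the true coefficients exactly; the interesting (and method-dependent) behaviour arises on non-linear models, where the sampling distribution and kernel determine what the coefficients mean.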
