UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

03/27/2023
by Vikram V. Ramaswamy, et al.

Concept-based explanations for convolutional neural networks (CNNs) aim to explain model behavior and outputs using a pre-defined set of semantic concepts (e.g., the model recognizes scene class “bedroom” based on the presence of concepts “bed” and “pillow”). However, they often do not faithfully (i.e., accurately) characterize the model's behavior and can be too complex for people to understand. Further, little is known about how faithful and understandable different explanation methods are, and how to control these two properties. In this work, we propose UFO, a unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations. UFO formalizes understandability and faithfulness as mathematical objectives and unifies most existing concept-based explanation methods for CNNs. Using UFO, we systematically investigate how explanations change as we turn the knobs of faithfulness and understandability. Our experiments demonstrate a faithfulness-vs-understandability tradeoff: increasing understandability reduces faithfulness. We also provide insights into the “disagreement problem” in explainable machine learning, by analyzing when and how concept-based explanations disagree with each other.
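The abstract does not spell out UFO's exact objectives, but concept-based explanations are commonly fit as sparse linear surrogates over concept activations. The sketch below is a minimal illustration under that assumption, not the paper's actual formulation: an L1 penalty serves as an "understandability knob" (a sparser surrogate names fewer concepts), while the surrogate's fit to the CNN's class score measures faithfulness. The data, variable names (C, y, lam), and the use of scikit-learn's Lasso are all hypothetical.

```python
# Illustrative sketch only: approximate a CNN's class score with a sparse
# linear model over concept activations, then sweep the sparsity penalty
# to trade understandability against faithfulness.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data: concept activations C (n images x k concepts such as
# "bed", "pillow", ...) and the CNN's logit y for scene class "bedroom".
# Here y depends mainly on the first two concepts, plus noise.
n, k = 500, 20
C = rng.random((n, k))
y = 2.0 * C[:, 0] + 1.5 * C[:, 1] + 0.1 * rng.standard_normal(n)

for lam in [0.001, 0.01, 0.1]:  # turn the understandability knob
    g = Lasso(alpha=lam).fit(C, y)
    n_concepts = np.count_nonzero(g.coef_)  # explanation complexity
    faithfulness = g.score(C, y)            # R^2 of surrogate vs. CNN score
    print(f"lam={lam}: {n_concepts} concepts used, R^2={faithfulness:.3f}")
```

Running this sweep makes the tradeoff described in the abstract concrete: larger lam zeroes out more concept weights, yielding a shorter, more understandable explanation at the cost of a lower R^2, i.e., lower faithfulness to the model's score.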

