Semantics for Global and Local Interpretation of Deep Neural Networks
Deep neural networks (DNNs) achieve state-of-the-art performance on many tasks thanks to their high expressiveness, but their distributed feature representations are difficult to interpret semantically. In this work, human-interpretable semantic concepts are associated with vectors in a DNN's feature space, and the association process is formulated mathematically as an optimization problem. The semantic vectors obtained from its optimal solution are then used to interpret the network both globally and locally: the global interpretations reveal the knowledge a DNN has learned, while the local interpretations help explain the individual decisions it makes. Empirical experiments demonstrate how the identified semantics can be used to interpret existing DNNs.
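The abstract does not spell out the optimization objective, so the following is only a rough illustration of the general idea, not the paper's method: a concept is associated with a unit vector in feature space by solving a regularized least-squares problem over layer activations of concept positives and negatives (similar in spirit to a linear concept probe). All names and the random feature data are hypothetical placeholders.

```python
# Minimal sketch (assumed formulation, not the paper's exact objective):
# learn a semantic direction v in feature space via ridge regression on
# activations of concept vs. non-concept examples, then score individual
# inputs by projecting their features onto v (local interpretation).
import numpy as np


def concept_vector(phi_pos: np.ndarray, phi_neg: np.ndarray,
                   lam: float = 1.0) -> np.ndarray:
    """Solve v = argmin ||Phi v - y||^2 + lam * ||v||^2 in closed form."""
    phi = np.vstack([phi_pos, phi_neg])                   # (n, d) stacked features
    y = np.concatenate([np.ones(len(phi_pos)),            # +1: concept present
                        -np.ones(len(phi_neg))])          # -1: concept absent
    d = phi.shape[1]
    v = np.linalg.solve(phi.T @ phi + lam * np.eye(d), phi.T @ y)
    return v / np.linalg.norm(v)                          # unit-norm semantic vector


def local_concept_score(feature: np.ndarray, v: np.ndarray) -> float:
    """How strongly a single input's feature vector expresses the concept."""
    return float(feature @ v)


# Toy usage with random stand-ins for activations from some DNN layer.
rng = np.random.default_rng(0)
phi_pos = rng.normal(1.0, 1.0, size=(50, 16))   # features of concept examples
phi_neg = rng.normal(-1.0, 1.0, size=(50, 16))  # features of non-concept examples
v = concept_vector(phi_pos, phi_neg)
print(local_concept_score(rng.normal(1.0, 1.0, size=16), v))
```

Under this reading, a global interpretation would aggregate such scores over a whole dataset or class (e.g., which classes align with the concept direction), while the local score above explains one prediction at a time.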