Understanding Deep Architectures by Interpretable Visual Summaries

01/27/2018
by   Marco Carletti, et al.

A consistent body of research investigates the recurrent visual patterns exploited by deep networks for object classification with the help of diverse visualization techniques. Unfortunately, no effort has been spent on showing that these techniques are effective in leading researchers to unambiguous and exhaustive explanations. This paper goes in this direction, presenting a visualization framework that produces a group of clusters, or summaries, each formed by crisp image regions focusing on a particular part that the network has exploited with high regularity to classify a given class. In most cases, these parts carry a semantic meaning, making the explanation simple and universal. For example, the method suggests that AlexNet, when classifying the ImageNet class "robin", is highly sensitive to the patterns of the head, the body, the legs, the wings and the tail, providing five summaries in which these parts are consistently highlighted. The approach consists of a sparse optimization step producing sharp image masks whose perturbation causes a high classification loss, followed by a clustering step: the regions composing the masks are grouped by means of a proposal-flow-based similarity score, which associates visually similar patterns of different objects located in corresponding positions. The final clusters are visual summaries that are easy to interpret, as found by the first user study of this kind. The summaries can also be used to compare different architectures: for example, our approach explains the superiority of GoogLeNet over AlexNet by the fact that the former gives rise to more summaries, indicating its ability to capture a larger number of distinct semantic parts.
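To make the first step concrete, here is a minimal sketch of a perturbation-mask optimization in the spirit described above, assuming a PyTorch ImageNet classifier. The function name optimize_mask, the blur-based perturbation, and all hyperparameters (steps, lr, lambda_l1, lambda_tv) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def optimize_mask(model, image, target_class, steps=300, lr=0.1,
                  lambda_l1=0.05, lambda_tv=0.2):
    """Learn a sparse, crisp mask whose perturbation suppresses the class score."""
    model.eval()
    # Blurred copy of the input, used as the "deleted" content for masked pixels.
    blurred = F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)
    logit_mask = torch.zeros(1, 1, *image.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([logit_mask], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(logit_mask)                  # mask values in [0, 1]
        perturbed = image * (1 - m) + blurred * m      # blur where m is high
        score = F.softmax(model(perturbed), dim=1)[0, target_class]
        # L1 pushes the mask to be sparse; total variation keeps regions crisp.
        tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() \
           + (m[..., :, 1:] - m[..., :, :-1]).abs().mean()
        loss = score + lambda_l1 * m.mean() + lambda_tv * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logit_mask).detach()
```

Under these assumptions, calling `optimize_mask(model, image, target_class)` on a normalized 1x3xHxW tensor returns a [0, 1] mask highlighting the regions whose removal most hurts the target class score; the sharp connected regions of that mask are what the framework then clusters.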
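The clustering step relies on a proposal-flow-based similarity between regions, which is not reproduced here. As a hedged stand-in, the sketch below groups regions from a precomputed pairwise similarity matrix using average-linkage agglomerative clustering from scikit-learn; the threshold value is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def summaries_from_similarity(similarity, threshold=0.6):
    """Group regions into summaries given a pairwise similarity matrix.

    similarity: (n_regions, n_regions) array with values in [0, 1], where
    entry (i, j) scores how well region i matches region j (in the paper,
    this score would come from the proposal-flow comparison).
    """
    distance = 1.0 - np.asarray(similarity)
    clusterer = AgglomerativeClustering(
        n_clusters=None,               # let the threshold decide cluster count
        distance_threshold=threshold,
        metric="precomputed",          # named 'affinity' in scikit-learn < 1.2
        linkage="average",
    )
    return clusterer.fit_predict(distance)  # one summary id per region
```

Each resulting cluster plays the role of one visual summary, e.g. the five head/body/legs/wings/tail groups reported for the "robin" class.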
