Contrastive Reasoning in Neural Networks

03/23/2021
by Mohit Prabhushankar, et al.

Neural networks represent data as projections on trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify these dependencies is termed feed-forward inference. Such inference mechanisms are justified by classical cause-to-effect inductive reasoning models. Inductive feed-forward inference is widely used because of its mathematical simplicity and operational ease. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose an effect-to-cause inference model that reasons abductively. Here, the features represent the change from existing weight dependencies given a certain effect. We term this change contrast and the ensuing reasoning mechanism contrastive reasoning. In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast. We demonstrate the value of contrastive reasoning in two stages of a neural network's reasoning pipeline: inferring and visually explaining decisions for the application of object recognition. We illustrate the value of contrastively recognizing images under distortion by reporting an accuracy improvement of 3.47% under the proposed contrastive framework on the CIFAR-10C, noisy STL-10, and VisDA datasets.
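To make the abstract's notion of contrast concrete, the sketch below shows one plausible way to extract an effect-to-cause contrast map: backpropagating the loss between the network's prediction P and a contrast class Q, in a Grad-CAM-style setup. This is a minimal illustration under assumed choices (ResNet-18 backbone, `layer4` as the feature layer, and the helper `contrast_map`), not the authors' released implementation.

```python
# Hypothetical sketch: answering "why class P, rather than class Q?" by
# backpropagating a loss against the contrast class Q. All model/layer
# choices here are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}
layer = model.layer4  # assumed feature layer (last conv block)

# Hooks store the layer's forward activations and backward gradients.
layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

def contrast_map(x, contrast_class):
    """Backpropagate the loss between the prediction and a contrast class Q.

    The gradients encode the change in existing weight dependencies needed
    for the network to output Q instead of its prediction P -- one reading
    of the paper's notion of contrast.
    """
    logits = model(x)                       # feed-forward inference (cause -> effect)
    target = torch.tensor([contrast_class])
    loss = F.cross_entropy(logits, target)  # loss with respect to the contrast class Q
    model.zero_grad()
    loss.backward()
    # Grad-CAM-style weighting of activations by their pooled gradients.
    w = gradients["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * activations["feat"]).sum(dim=1))
    return cam / (cam.max() + 1e-8)

x = torch.randn(1, 3, 224, 224)             # stand-in input image
heatmap = contrast_map(x, contrast_class=207)
```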
