Logic-inspired Deep Neural Networks

11/20/2019
by Minh Le, et al.

Deep neural networks have achieved impressive performance and become the de facto standard in many tasks. However, phenomena such as adversarial examples and fooling examples hint that the generalizations they make are flawed. We argue that the problem is rooted in their distributed and connected nature, and we propose remedies inspired by propositional logic. Our experiments show that the proposed models are more local and better at resisting fooling and adversarial examples. By means of an ablation analysis, we reveal insights into adversarial examples and suggest a new hypothesis on their origins.
