Does Symbolic Knowledge Prevent Adversarial Fooling?

12/19/2019
by Stefano Teso, et al.

Arguments in favor of injecting symbolic knowledge into neural architectures abound. When done right, constraining a sub-symbolic model can substantially improve its predictive performance, reduce its sample complexity, and prevent it from predicting invalid configurations. Focusing on deep probabilistic (logical) graphical models – i.e., constrained joint distributions whose parameters are determined (in part) by neural nets based on low-level inputs – we draw attention to an elementary but unintended consequence of symbolic knowledge: the resulting constraints can propagate the negative effects of adversarial examples.
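To make the propagation effect concrete, here is a minimal sketch (not the authors' model; the constraint, the probabilities, and the `constrained_joint` helper are all hypothetical). It assumes a net that outputs independent marginals for two binary labels y1 and y2, and symbolic knowledge imposing the hard constraint y1 = y2, enforced by renormalizing the product of the marginals over the valid configurations:

```python
import numpy as np

# Toy setup (hypothetical, for illustration only): a neural net outputs
# independent marginals p(y1|x) and p(y2|x) for two binary labels, and
# symbolic knowledge declares only the agreeing configurations
# (y1, y2) in {(0, 0), (1, 1)} valid. The constrained joint renormalizes
# the product of the marginals over these valid configurations.

def constrained_joint(p1, p2):
    """Distribution over the valid configurations (0,0) and (1,1)."""
    scores = np.array([(1 - p1) * (1 - p2), p1 * p2])
    return scores / scores.sum()

# Clean input: the net is confident that y1 = 1 and mildly prefers y2 = 1.
clean = constrained_joint(p1=0.95, p2=0.60)

# Adversarial input: a perturbation crafted against y1 alone flips its
# marginal; the marginal for y2 is untouched.
adv = constrained_joint(p1=0.05, p2=0.60)

print(f"clean: P(y1=1, y2=1) = {clean[1]:.2f}")  # ~0.97 -> predict (1, 1)
print(f"adv:   P(y1=1, y2=1) = {adv[1]:.2f}")    # ~0.07 -> predict (0, 0)
```

Because the constraint couples the two labels, an attack crafted against y1 alone flips the constrained prediction for y2 as well, even though the raw marginal for y2 never changed. This is exactly the kind of unintended propagation the abstract warns about.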
