Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

06/04/2018
by   Vahid Behzadan, et al.

Recent developments have established the vulnerability of deep reinforcement learning to policy manipulation attacks via intentionally perturbed inputs, known as adversarial examples. In this work, we propose a technique for mitigating such attacks based on the addition of noise to the parameter space of deep reinforcement learners during training. We experimentally verify that parameter-space noise reduces the transferability of adversarial examples, and demonstrate the promising performance of this technique in mitigating the impact of white-box and black-box attacks at both test and training time.
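The abstract does not spell out the mechanism, but the core idea of parameter-space noise can be sketched in a few lines: instead of perturbing the agent's actions (as in epsilon-greedy exploration), Gaussian noise is added directly to the network weights before action selection during training. The following is a minimal, illustrative NumPy sketch; the tiny Q-network, its dimensions, and the noise scale `sigma` are assumptions for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_values(params, state):
    """Tiny two-layer Q-network: maps a state vector to one Q-value per action."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def perturb(params, sigma):
    """Return a copy of the parameters with i.i.d. Gaussian noise added."""
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in params]

# Hypothetical dimensions: 4-dim state, 8 hidden units, 2 actions.
params = [rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 2)), np.zeros(2)]
state = rng.normal(size=4)

# During training, actions are selected with the *perturbed* parameters,
# so exploration (and the policy an attacker would probe) varies in
# parameter space rather than action space.
noisy_params = perturb(params, sigma=0.1)
action = int(np.argmax(q_values(noisy_params, state)))
```

Because the perturbed weights induce a slightly different policy at each draw, adversarial examples crafted against one realization of the network transfer less reliably to the agent, which is the effect the paper evaluates.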
