DeepCleanse: Input Sanitization Framework Against Trojan Attacks on Deep Neural Network Systems

08/09/2019
by Bao Gia Doan, et al.

Recent Trojan attacks have raised doubts over the safety and trustworthiness of deep learning systems. These attack variants are both highly potent and insidious. An attacker exploits a backdoor crafted into the neural network: a secret trigger, the Trojan, applied to an input alters the model's decision on any input to a target prediction determined by, and known only to, the attacker. Detecting Trojan attacks, especially at run-time, is challenging: i) the Trojan triggers can be unnoticeable to humans; ii) the trigger is a secret guarded by and known only to the attacker; iii) the stealthy malicious behavior is evident only when the secret trigger is activated by an attacker; and iv) interpreting a learned model to identify a Trojan is a difficult task. In this paper, we propose DeepCleanse, a novel method for neutralizing backdoor attacks by sanitizing inputs to Deep Neural Networks in the input domain of vision tasks. We sanitize each incoming input by analyzing the side-channel information inevitably leaked through the interpretability of a potentially trojaned model, extracting the potentially affected areas, and then restoring the input for the classification task with an inpainting method we propose. Through extensive experiments on two popular datasets (CIFAR10 and GTSRB), we demonstrate the efficacy of DeepCleanse against backdoor attacks, with dramatic reductions in attack success rates from 100%, while maintaining classification performance on benign or sanitized Trojan inputs. To the best of our knowledge, this is the first defense method capable of sanitizing trojaned inputs; in particular, our approach requires neither ground-truth label data nor anomaly detection methods to accurately detect a Trojan.
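The abstract outlines a three-stage pipeline: locate the trigger-affected region via model interpretability, excise it, and inpaint the excised area before classification. Below is a minimal sketch of such a pipeline in PyTorch, not the paper's actual implementation: the GradCAM-style localization and OpenCV's Telea inpainting stand in for the paper's components (the paper proposes its own inpainting method), and names such as `model`, `layer`, and the 0.8 mask threshold are illustrative assumptions.

```python
# Minimal sketch of an interpretability-driven input-sanitization pipeline.
# Assumptions: a PyTorch classifier `model` in eval mode, a chosen conv
# layer `layer`, a normalized input tensor x of shape (1, 3, H, W), and the
# corresponding uint8 HxWx3 image `img_uint8`. OpenCV Telea inpainting is a
# simple stand-in for the GAN-based restoration a real defense might use.
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def gradcam_mask(model, layer, x, thresh=0.8):
    """Locate the input region most influential on the prediction (GradCAM)."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(x)
    logits[0, logits[0].argmax()].backward()   # gradient of the top score
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)       # channel importance
    cam = F.relu((w * acts["a"]).sum(dim=1))            # (1, h, w) heatmap
    cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= thresh).cpu().numpy().astype(np.uint8)  # binary trigger mask

def sanitize(img_uint8, mask):
    """Remove the suspected trigger region and restore it by inpainting."""
    return cv2.inpaint(img_uint8, mask * 255, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

# Usage (illustrative): for a ResNet one might pass model.layer4 as `layer`.
#   mask  = gradcam_mask(model, model.layer4, x)
#   clean = sanitize(img_uint8, mask)   # feed `clean` back to the classifier
```

The intuition, per the abstract, is that a localized trigger dominates the model's attention on trojaned inputs; masking and inpainting that region should revert the prediction to the correct class while leaving benign inputs largely unchanged.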
