Label-Smoothed Backdoor Attack

02/19/2022
by Minlong Peng, et al.

By injecting a small number of poisoned samples into the training set, backdoor attacks aim to make the victim model produce designed outputs on any input carrying a pre-designed backdoor trigger. To achieve a high attack success rate with as few poisoned training samples as possible, most existing attack methods change the labels of the poisoned samples to the target class. This practice often results in severe over-fitting of the victim model to the backdoors, making the attack quite effective at output control but easier to identify by human inspection or automatic defense algorithms. In this work, we propose a label-smoothing strategy to overcome this over-fitting problem, obtaining the Label-Smoothed Backdoor Attack (LSBA). In an LSBA, the label of a poisoned sample x is changed to the target class with a probability p_n(x) instead of 100%, where p_n(x) is specifically designed so that the prediction probability of the target class is only slightly greater than those of the other classes. Empirical studies of several existing backdoor attacks show that our strategy considerably improves the stealthiness of these attacks while still achieving a high attack success rate. In addition, our strategy enables manual control over the prediction probability of the designed output by manipulating the number of applied and activated LSBAs. (Source code will be published at https://github.com/v-mipeng/LabelSmoothedAttack.git.)
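To make the core mechanism concrete, the following is a minimal sketch of label-smoothed poisoning as the abstract describes it: each poisoned sample is relabeled to the target class only with probability p_n(x), rather than always. Everything here is an illustrative assumption rather than the paper's implementation: the helper names `inject_trigger` and `p_n`, the constant smoothing probability, and the `POISON_RATE` are all hypothetical placeholders.

```python
import random

TARGET_CLASS = 0    # attacker-chosen target label (assumption)
POISON_RATE = 0.05  # fraction of training samples to poison (assumption)


def p_n(x):
    # Hypothetical stand-in for the paper's designed p_n(x), which is
    # chosen so the victim's predicted probability of the target class
    # ends up only slightly above the other classes. A fixed value is
    # used here purely for illustration.
    return 0.6


def inject_trigger(x):
    # Placeholder for whatever backdoor pattern the underlying attack
    # uses (e.g., a rare token sequence for text, a pixel patch for
    # images). Here we assume text inputs.
    return x + " <trigger>"


def poison_dataset(dataset):
    """Label-smoothed poisoning: a poisoned sample keeps its original
    label with probability 1 - p_n(x) and is relabeled to the target
    class with probability p_n(x), instead of being relabeled 100% of
    the time as in conventional backdoor attacks."""
    poisoned = []
    for x, y in dataset:
        if random.random() < POISON_RATE:
            x = inject_trigger(x)
            if random.random() < p_n(x):
                y = TARGET_CLASS  # smoothed relabeling, not guaranteed
        poisoned.append((x, y))
    return poisoned
```

The design intuition, per the abstract, is that the partial relabeling prevents the victim model from over-fitting to the trigger, so triggered inputs receive only a modest boost toward the target class rather than a near-certain prediction, which is what makes the attack harder to detect.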
