Dynamic backdoor attacks against federated learning

11/15/2020
by Anbu Huang, et al.

Federated Learning (FL) is a new machine learning framework that enables millions of participants to collaboratively train a machine learning model without compromising data privacy and security. Because each client is independent and its data is confidential, FL cannot guarantee by design that all clients are honest, which naturally makes it vulnerable to adversarial attack. In this paper, we focus on dynamic backdoor attacks in the FL setting, where the adversary's goal is to degrade the model's performance on targeted tasks while maintaining good performance on the main task. Existing studies mainly consider static backdoor attacks, in which the injected poison pattern never changes. FL, however, is an online learning framework, and the attacker can change adversarial targets dynamically; traditional algorithms must then learn each new targeted task from scratch, which can be computationally expensive and require a large number of adversarial training examples. To avoid this, we bridge meta-learning and backdoor attacks in the FL setting, so that a versatile model can be learned from previous experiences and adapted quickly to new adversarial tasks with only a few examples. We evaluate our algorithm on different datasets and demonstrate that it achieves good results with respect to dynamic backdoor attacks. To the best of our knowledge, this is the first paper to focus on dynamic backdoor attacks in the FL setting.
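To illustrate the idea of meta-learning for fast adaptation to new adversarial targets, here is a minimal, hypothetical sketch. It is not the paper's code or model: it uses a toy softmax-regression "global model", a made-up 4-pixel trigger (`stamp_trigger`), and a Reptile-style meta-update as a simpler stand-in for the meta-learning component described in the abstract. The point it demonstrates is that, after meta-training over a distribution of adversarial target labels, a malicious client can redirect the backdoor to an unseen target with only a few poisoned examples and gradient steps.

```python
# Hypothetical sketch (not the paper's method): Reptile-style meta-learning
# over a family of backdoor targets, then fast adaptation to a new target.
import numpy as np

rng = np.random.default_rng(0)
D, C = 28 * 28, 10                 # input dimension, number of classes

def stamp_trigger(x):
    """Apply an assumed fixed 4-pixel trigger pattern to a flattened image."""
    x = x.copy()
    x[-4:] = 1.0
    return x

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def grad_step(W, X, y, lr=0.1):
    """One SGD step on the cross-entropy loss of softmax regression."""
    P = softmax(X @ W)                       # (N, C) class probabilities
    P[np.arange(len(y)), y] -= 1.0           # dL/dlogits = P - one_hot(y)
    return W - lr * X.T @ P / len(y)

def make_poison_task(target_label, n=16):
    """A few synthetic images stamped with the trigger and relabeled."""
    X = np.clip(rng.normal(0.1307, 0.3081, size=(n, D)), 0.0, 1.0)
    X = np.stack([stamp_trigger(x) for x in X])
    y = np.full(n, target_label)
    return X, y

# Meta-training: each "task" is a different adversarial target label.
W_meta = np.zeros((D, C))
for _ in range(200):
    target = rng.integers(0, C)              # sample an adversarial task
    X, y = make_poison_task(target)
    W = W_meta.copy()
    for _ in range(5):                       # inner-loop adaptation
        W = grad_step(W, X, y)
    W_meta += 0.1 * (W - W_meta)             # Reptile outer update

# Fast adaptation to a brand-new adversarial target with few examples.
new_target = 7
X_new, y_new = make_poison_task(new_target, n=8)
W_fast = W_meta.copy()
for _ in range(3):
    W_fast = grad_step(W_fast, X_new, y_new)

preds = softmax(X_new @ W_fast).argmax(axis=1)
print("fraction of triggered inputs mapped to new target:",
      (preds == new_target).mean())
```

In a real FL attack the adapted weights would be folded into the malicious client's model update before aggregation; this toy omits the aggregation step and the clean main-task data, and only shows why meta-initialization reduces the number of poisoned examples needed per new target.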
