Backdoor Attacks in Peer-to-Peer Federated Learning
We study backdoor attacks in peer-to-peer federated learning systems on different graph topologies and datasets. We show that only 5% attacker nodes are sufficient to perform a backdoor attack with a 42% attack success rate, without decreasing the accuracy on clean data by more than 2%, and that the attack can be amplified by the attacker crashing a small number of nodes. We evaluate defenses proposed in the context of centralized federated learning and show they are ineffective in peer-to-peer settings. Finally, we propose a defense that mitigates the attacks by applying different clipping norms to the model updates received from peers and to the local model trained by a node.
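To illustrate the idea behind the proposed defense, the sketch below shows one way a node could apply a tighter clipping norm to updates received from peers than to its own local update before averaging them. This is a minimal illustration, not the authors' implementation; the threshold values `peer_clip` and `local_clip` and the plain-averaging aggregation are assumptions made for the example.

```python
import numpy as np


def clip_update(update: np.ndarray, max_norm: float) -> np.ndarray:
    """Rescale an update so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    if norm > max_norm:
        return update * (max_norm / norm)
    return update


def aggregate_with_clipping(local_update: np.ndarray,
                            peer_updates: list[np.ndarray],
                            local_clip: float = 10.0,   # assumed threshold for the node's own update
                            peer_clip: float = 1.0      # assumed (tighter) threshold for peer updates
                            ) -> np.ndarray:
    """Average the node's own update with its peers' updates,
    applying a different clipping norm to each group."""
    clipped = [clip_update(u, peer_clip) for u in peer_updates]
    clipped.append(clip_update(local_update, local_clip))
    return np.mean(clipped, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local = rng.normal(size=100)
    # One peer sends an outsized update, as a backdoored model might.
    peers = [rng.normal(size=100) * 5.0, rng.normal(size=100)]
    new_update = aggregate_with_clipping(local, peers)
    print(new_update.shape, np.linalg.norm(new_update))
```

Clipping peer updates more aggressively bounds how much any single (possibly malicious) neighbor can shift the aggregated model, while the looser bound on the local update preserves the node's own learning progress.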