Robust Decentralized Learning for Neural Networks
In decentralized learning, data is distributed among local clients that collaboratively train a shared prediction model using secure aggregation. To preserve client privacy, modern decentralized learning paradigms require each client to keep its training data private and upload only summarized model updates to the server. However, aggregating corrupted updates (e.g., adversarial manipulations) at the server can quickly degenerate the model and collapse its performance. In this work, we present a robust decentralized learning framework, Decent_BVA, which uses bias-variance based adversarial training via asymmetrical communication between each client and the server. Our experiments are conducted on neural networks with cross-entropy loss. Nevertheless, the proposed framework accommodates any classification loss function (e.g., cross-entropy loss, mean squared error loss) for which the gradients of the bias and variance can be tractably estimated from the local clients' models. Given such gradients, any gradient-based adversarial training strategy can incorporate the bias-variance oriented adversarial examples, e.g., the bias-variance based FGSM and PGD proposed in this paper. Experiments show that Decent_BVA is robust to classical adversarial attacks even when the level of corruption is high, while remaining competitive with conventional decentralized learning in terms of model accuracy and efficiency.
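The abstract does not spell out the exact formulation, but the following is a minimal sketch of what a bias-variance oriented FGSM step might look like, assuming PyTorch. The function name bv_fgsm, the weighting coefficient lam, and the specific choices of bias term (cross-entropy of the mean client prediction) and variance term (dispersion of client predictions around that mean) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def bv_fgsm(x, y, client_models, epsilon=0.03, lam=0.5):
    """Hypothetical bias-variance FGSM sketch (not the paper's exact method).

    Perturbs x along the sign of the gradient of a combined objective:
    a 'bias' term (loss of the mean client prediction w.r.t. the label)
    plus a 'variance' term (dispersion of client predictions around
    their mean).
    """
    x_adv = x.clone().detach().requires_grad_(True)

    # Per-client class probabilities, shape (num_clients, batch, classes).
    probs = torch.stack([F.softmax(m(x_adv), dim=-1) for m in client_models])
    mean_prob = probs.mean(dim=0)

    # Bias term: loss of the ensemble-mean prediction against the label.
    bias_loss = F.nll_loss(torch.log(mean_prob.clamp_min(1e-12)), y)

    # Variance term: average squared deviation of clients from the mean.
    var_loss = ((probs - mean_prob.unsqueeze(0)) ** 2).sum(dim=-1).mean()

    loss = bias_loss + lam * var_loss
    loss.backward()

    # One FGSM step: move inputs along the sign of the combined gradient,
    # then clamp back to the valid input range (assumed to be [0, 1]).
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A multi-step PGD variant would follow by iterating this step with a smaller step size and projecting back onto the epsilon-ball after each update.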