Client-Wise Targeted Backdoor in Federated Learning
Federated Learning (FL) emerged from the privacy concerns raised by traditional machine learning: it trains decentralized models by averaging their parameters, without clients ever sharing their datasets. Ongoing research has shown that FL is nonetheless prone to security and privacy violations. Recent studies established that FL leaks information through inference attacks, which reconstruct data samples used during training or extract membership information. Additionally, poisoning and backdoor attacks compromise FL security by inserting poisoned data into clients' datasets or by directly modifying the model, degrading model performance for every client. Our proposal combines these attacks to perform a client-wise targeted backdoor, in which a single victim client is backdoored while the rest remain unaffected. Our results establish the viability of the presented attack, achieving a 100% label accuracy up to 0...
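The abstract does not detail the mechanics, but the two ingredients it names, FedAvg-style aggregation and data poisoning with a backdoor trigger, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the image task, the 3x3 pixel-pattern trigger, and the function names are all hypothetical assumptions.

```python
import numpy as np

def fedavg(client_weights):
    # Standard FedAvg aggregation: element-wise mean of the clients'
    # model parameters, taken layer by layer.
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*client_weights)]

def poison_batch(x, y, target_label, trigger_value=1.0):
    # Hypothetical pixel-pattern backdoor: stamp a small trigger patch
    # onto the inputs and relabel them with the attacker's target class.
    # In a client-wise variant, the poisoned samples would additionally
    # be crafted (e.g. using information from inference attacks) to match
    # the victim's data distribution, so only the victim is affected.
    x = x.copy()
    x[:, -3:, -3:] = trigger_value  # 3x3 trigger in the bottom-right corner
    y = np.full_like(y, target_label)
    return x, y

# Toy aggregation round: updates from four clients (one of which could
# have trained on poisoned data) are averaged into the global model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(8, 4)), rng.normal(size=4)] for _ in range(4)]
global_model = fedavg(clients)
```

Because the malicious update is simply averaged in with the benign ones, the backdoor can survive aggregation while the global model's accuracy on clean data stays largely intact, which is what makes such attacks hard to spot from aggregate metrics alone.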