The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning

03/27/2023
by Joshua C. Zhao, et al.

Secure aggregation promises a heightened level of privacy in federated learning, maintaining that a server only has access to a decrypted aggregate update. Within this setting, linear layer leakage methods are the only data reconstruction attacks able to scale and achieve a high leakage rate regardless of the number of clients or batch size. This is done by increasing the size of an injected fully-connected (FC) layer. However, this results in a resource overhead that grows with the number of clients. We show that this resource overhead is caused by an incorrect perspective in all prior work, which treats an attack on an aggregate update the same way as an attack on an individual update with a larger batch size. Instead, attacking the update from the perspective that aggregation combines multiple individual updates allows sparsity to be applied, alleviating the resource overhead. We show that the use of sparsity can decrease the model size overhead by over 327× and the computation time by 3.34× compared to SOTA, while maintaining an equivalent total leakage rate, 77% even with 1000 clients in aggregation.
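The resource argument above can be made concrete with a small sketch. The following is not the authors' code; it is a minimal illustration, assuming hypothetical sizes (`num_clients`, `batch_size`, `units_per_client`, `input_dim`), of why viewing the aggregate as a sum of individual updates helps: a dense injected FC layer must be sized for the total number of samples across all clients, while the sparse view lets each client carry only its own slice of the layer, with all other rows zero.

```python
# A minimal sketch (not the authors' implementation) of the dense-vs-sparse
# resource comparison for an injected fully-connected (FC) leakage layer.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

num_clients = 100          # clients participating in secure aggregation (assumed)
batch_size = 64            # samples per client (assumed)
input_dim = 3 * 32 * 32    # e.g. a flattened CIFAR-10 image (assumed)

units_per_client = 2 * batch_size  # FC units needed per client (assumed)

# Dense view (prior work): one FC layer scaled to the total sample count,
# so every client stores and computes the full layer.
dense_fc = nn.Linear(input_dim, num_clients * units_per_client)
dense_params = sum(p.numel() for p in dense_fc.parameters())

# Sparse view: each client only needs its own slice of the layer; the rows
# belonging to other clients stay zero, shrinking per-client cost
# roughly num_clients-fold.
client_id = 7
client_fc = nn.Linear(input_dim, units_per_client)
sparse_params = sum(p.numel() for p in client_fc.parameters())

# Embed the client's slice into the full (mostly zero) weight matrix that
# the server recovers after aggregation; zero rows contribute nothing.
full_weight = torch.zeros(num_clients * units_per_client, input_dim)
start = client_id * units_per_client
full_weight[start:start + units_per_client] = client_fc.weight.detach()

print(f"dense per-client FC parameters:  {dense_params:,}")
print(f"sparse per-client FC parameters: {sparse_params:,}")
print(f"reduction: {dense_params / sparse_params:.0f}x")
```

In an actual attack the server would also craft the weights and biases of the injected layer so that individual training samples can be recovered from the aggregate; the sketch above only quantifies the size overhead, not the leakage mechanism itself.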
