Quantification of the Leakage in Federated Learning

10/12/2019
by   Zhaorui Li, et al.

With the growing emphasis on user privacy, federated learning has become increasingly popular, and many architectures have been proposed to improve its security. Most of these architectures rest on the assumption that gradients computed on the data cannot leak information about it. Recently, however, several works have shown that shared gradients may in fact leak the training data. In this paper, we analyze this leakage for a federated approximated logistic regression model and show that the gradients can leak the complete training data when all elements of the inputs are either 0 or 1.
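As a minimal illustration of why such leakage is possible (a simplified sketch, not the paper's exact construction): for a single example, the logistic-regression gradient with respect to the weights is a scalar multiple of the input vector, so when every input element is 0 or 1, the input can be read off from the nonzero pattern of the gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(w, x, y):
    # Gradient of the logistic loss for one (x, y) pair:
    #   g = (sigmoid(w.x) - y) * x,
    # i.e. a scalar multiple of the input x.
    return (sigmoid(w @ x) - y) * x

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8).astype(float)  # binary input vector
y = 1.0                                       # label
w = rng.normal(size=8)                        # current model weights

g = gradient(w, x, y)
# The shared scalar (sigmoid(w.x) - y) is strictly negative here,
# so g is nonzero exactly where x has a 1: thresholding recovers x.
recovered = (g != 0).astype(float)
print(np.array_equal(recovered, x))  # True
```

This toy case recovers a single binary example from one gradient; the paper studies the more general setting of an approximated logistic regression model trained federatively.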
