DP-ADMM: ADMM-based Distributed Learning with Differential Privacy

08/30/2018
by Zonghao Huang, et al.

Distributed machine learning is driving major changes across a wide variety of domains, but it also introduces privacy risks through the information exchanged during the learning process. This paper focuses on a class of regularized empirical risk minimization problems and develops a privacy-preserving distributed learning algorithm. We use the Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms locally to guarantee differential privacy. However, naively combining ADMM with local randomization mechanisms yields a non-convergent algorithm with poor performance, especially when the noise needed to guarantee a low total privacy loss is large. Moreover, such an approach cannot be applied to learning problems with non-smooth objective functions. To address these concerns, we propose an improved ADMM-based differentially private distributed learning algorithm, DP-ADMM, which uses an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DP-ADMM applies to a general class of convex learning problems, provides a differential privacy guarantee, and achieves an O(1/√t) rate of convergence, where t is the number of iterations. Our evaluations demonstrate that the approach achieves good accuracy and effectiveness even under a low total privacy budget.
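To make the high-level description above more concrete, here is a minimal sketch of what one agent's iteration in a DP-ADMM-style algorithm could look like: a linearized (approximate augmented Lagrangian) primal update perturbed by Gaussian noise with iteration-dependent variance, followed by a dual update. The function and variable names, step sizes, and the exact noise schedule are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def local_dp_admm_step(x, lam, z, grad_fn, rho, eta_t, sigma_t, rng):
    """One illustrative noisy local iteration (sketch, not the paper's exact update).

    x        : current local primal variable (np.ndarray)
    lam      : current local dual variable
    z        : current global/consensus variable
    grad_fn  : (sub)gradient oracle of the local empirical risk
    rho      : ADMM penalty parameter
    eta_t    : step size for the approximate (linearized) augmented Lagrangian update
    sigma_t  : std. dev. of the Gaussian noise at iteration t (time-varying)
    rng      : np.random.Generator
    """
    # Gradient of the approximate augmented Lagrangian with respect to x
    g = grad_fn(x) + lam + rho * (x - z)
    # Gaussian mechanism: add noise to the update to provide differential privacy
    noise = rng.normal(0.0, sigma_t, size=x.shape)
    x_new = x - eta_t * (g + noise)
    # Dual ascent step
    lam_new = lam + rho * (x_new - z)
    return x_new, lam_new
```

In this sketch, letting sigma_t decrease over iterations mirrors the time-varying variance idea: early iterations tolerate more noise, while later iterations are perturbed less so the algorithm can still converge at the stated O(1/√t) rate, with the cumulative privacy loss tracked separately (e.g., via the moments accountant).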
