Renyi Differentially Private ADMM Based L1 Regularized Classification
In this paper we present two new algorithms for solving L1-regularized classification problems under Renyi differential privacy. Both algorithms are ADMM-based, so that at each iteration the empirical risk minimization and L1 regularization steps are separated into two optimization problems. We adopt the stochastic ADMM approach and use the recent Renyi differential privacy (RDP) technique to privatize the training data. The first algorithm achieves differential privacy by gradient perturbation, with privacy amplified by sub-sampling; the second achieves it by model perturbation, computing the sensitivity and perturbing the model after each training epoch. We compared our algorithms with several baselines on both real and simulated datasets. The results show that, under strong privacy requirements, the first algorithm performs well in classification, and the second performs well in feature selection when the data contain many irrelevant attributes.
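To make the first mechanism concrete, below is a minimal sketch of one plausible form of stochastic ADMM for L1-regularized logistic regression with gradient perturbation on a random sub-sample. This is an illustration only, not the paper's algorithm: the function name, all hyperparameters, and the use of a fixed Gaussian `noise_std` (which the paper would instead calibrate from the RDP budget) are assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of the L1 norm (the z-update in ADMM)
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def dp_admm_l1_logreg(X, y, lam=0.1, rho=1.0, eta=0.1,
                      batch_size=32, iters=200, noise_std=1.0,
                      clip=1.0, seed=0):
    """Illustrative stochastic ADMM for L1-regularized logistic
    regression with Gaussian gradient perturbation on a sub-sample.
    Labels y are in {-1, +1}.  noise_std is a free parameter here;
    a private implementation would set it from the RDP accountant."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)  # ERM variable
    z = np.zeros(d)  # L1 (sparse) variable
    u = np.zeros(d)  # scaled dual variable
    for _ in range(iters):
        # Sub-sampling: privacy amplification comes from using a random batch
        idx = rng.choice(n, size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        # Per-example logistic-loss gradients, clipped to bound sensitivity
        margins = yb * (Xb @ w)
        g = -(yb / (1.0 + np.exp(margins)))[:, None] * Xb
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(norms / clip, 1.0)
        grad = g.mean(axis=0) + rho * (w - z + u)
        # Gradient perturbation: add Gaussian noise before the w-update
        grad += rng.normal(0.0, noise_std * clip / batch_size, size=d)
        w = w - eta * grad
        # z- and u-updates touch no private data, so they need no noise
        z = soft_threshold(w + u, lam / rho)
        u = u + (w - z)
    return z

# Usage on toy data: y = sign(X @ w_true)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, -2.0, 0.0, 0.0, 0.0])
y = np.sign(X @ w_true)
z = dp_admm_l1_logreg(X, y, noise_std=0.1)
```

The point of the splitting is visible in the loop: only the w-update sees the training data and therefore carries the noise, while the L1 step reduces to a cheap soft-thresholding that can produce exact zeros for feature selection.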