Differentially Private Learning with Adaptive Clipping

05/09/2019
by Om Thakkar, et al.

We introduce a new adaptive clipping technique for training machine learning models with user-level differential privacy, removing the need for extensive parameter tuning. Previous approaches to this problem use the Federated Stochastic Gradient Descent or the Federated Averaging algorithm with noised updates, and compute a differential privacy guarantee using the Moments Accountant. These approaches rely on choosing a norm bound to which each user's update to the model is clipped, and this bound must be tuned carefully: the best value depends on the learning rate, model architecture, number of passes made over each user's data, and possibly various other parameters. We show that adaptively setting the clipping norm applied to each user's update, based on a differentially private estimate of a target quantile of the distribution of unclipped norms, is sufficient to remove the need for such extensive parameter tuning.
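A rough sketch of how one round of such an adaptive clipping step might look in Python/NumPy follows. This illustrates the general technique described in the abstract, not the paper's exact algorithm: the function and parameter names (private_quantile_update, clip_update, target_quantile, lr_clip, count_noise_std), the geometric update rule, and the Gaussian noise added to the clipped-fraction count are assumptions made for the example.

import numpy as np

def private_quantile_update(clip_norm, update_norms, target_quantile=0.5,
                            lr_clip=0.2, count_noise_std=1.0, rng=None):
    # One round of adaptive clipping-norm estimation (illustrative sketch,
    # not the paper's exact specification). Each user reports an indicator
    # of whether their unclipped update norm fell at or under the current
    # bound; the count is noised so the quantile estimate itself is
    # differentially private.
    rng = np.random.default_rng() if rng is None else rng
    indicators = (np.asarray(update_norms) <= clip_norm).astype(float)
    noisy_count = indicators.sum() + rng.normal(scale=count_noise_std)
    noisy_fraction = noisy_count / len(update_norms)
    # Geometric step: shrink the bound when more than the target fraction
    # of norms fit under it, grow it when fewer do, steering the bound
    # toward the target quantile of the unclipped-norm distribution.
    return clip_norm * np.exp(-lr_clip * (noisy_fraction - target_quantile))

def clip_update(update, clip_norm):
    # Scale a user's update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

A hypothetical training loop would interleave this with the usual noised aggregation, e.g.:

rng = np.random.default_rng(0)
clip_norm = 0.1
for _ in range(100):
    updates = [rng.normal(size=10) for _ in range(50)]  # stand-in user updates
    norms = [np.linalg.norm(u) for u in updates]
    clipped = [clip_update(u, clip_norm) for u in updates]
    # ... aggregate the clipped updates with Gaussian noise as in DP-FedAvg ...
    clip_norm = private_quantile_update(clip_norm, norms, rng=rng)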
