Amplifying Rényi Differential Privacy via Shuffling

07/11/2019
by Eloïse Berthier, et al.

Differential privacy is a useful tool for building machine learning models that limit how much information they reveal about their training data. We study the Rényi differential privacy of stochastic gradient descent when each training example is sampled without replacement (also known as cyclic SGD). Cyclic SGD is typically faster than traditional SGD and is the algorithm of choice in large-scale implementations. We recover privacy guarantees for cyclic SGD that are competitive with those known for sampling with replacement. Our proof techniques make no assumptions about the model or the data and are hence widely applicable.
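
To make the object of study concrete, the following is a minimal Python sketch of noisy cyclic SGD, where each epoch visits every example exactly once in a fresh random order (sampling without replacement), with per-example gradient clipping and Gaussian noise. The loss (logistic regression), function names, and hyperparameters such as max_norm and noise_std are illustrative assumptions and are not taken from the paper.

    import numpy as np

    def clip(g, max_norm):
        # Scale a per-example gradient down to at most max_norm in L2 norm,
        # which bounds the sensitivity of each update.
        norm = np.linalg.norm(g)
        return g * min(1.0, max_norm / (norm + 1e-12))

    def cyclic_dp_sgd(X, y, epochs=5, lr=0.1, max_norm=1.0, noise_std=1.0, seed=0):
        # Cyclic SGD: every epoch is a pass over a random permutation of the
        # data (without replacement), as opposed to drawing an independent
        # example with replacement at every step.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # logistic prediction
                g = (p - y[i]) * X[i]                # per-example gradient
                g = clip(g, max_norm)                # bound sensitivity
                g += noise_std * max_norm * rng.standard_normal(d)  # Gaussian noise
                w -= lr * g
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 5))
        y = (X @ rng.standard_normal(5) > 0).astype(float)
        print(cyclic_dp_sgd(X, y))

The per-example clipping and Gaussian noise follow the standard noisy-SGD template; the point of the sketch is only the inner loop over rng.permutation(n), which is the without-replacement (shuffled) ordering whose Rényi privacy guarantees the paper analyzes.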
