Differential Privacy in the Shuffle Model: A Survey of Separations

07/25/2021
by Albert Cheu, et al.

Differential privacy is often studied in one of two models. In the central model, a single analyzer has the responsibility of performing a privacy-preserving computation on data. But in the local model, each data owner ensures their own privacy. Although it removes the need to trust the analyzer, local privacy comes at a price: a locally private protocol is less accurate than a centrally private counterpart on many learning and estimation problems. Protocols in the shuffle model are designed to attain the best of both worlds: recent work has shown that high accuracy is possible with only a mild trust assumption. This survey paper gives an overview of novel shuffle protocols, along with lower bounds that establish the limits of the new model. We also summarize work that shows the promise of interactivity in the shuffle model.
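
To make the division of trust concrete, below is a minimal, illustrative sketch of a shuffle-model protocol for counting how many users hold a sensitive bit. The use of binary randomized response as the local randomizer, the function names, and the flip probability are assumptions chosen for illustration; they are not taken from the survey, and actual shuffle protocols rely on sharper privacy analyses (e.g., amplification by shuffling) to set parameters.

    import random

    def local_randomizer(bit: int, p: float) -> int:
        """Binary randomized response: flip the true bit with probability p."""
        return bit if random.random() > p else 1 - bit

    def shuffler(messages):
        """Uniformly permute the reports, hiding which user sent which message."""
        shuffled = list(messages)
        random.shuffle(shuffled)
        return shuffled

    def analyzer(messages, p: float) -> float:
        """Debias the noisy sum to estimate the true count of ones."""
        n = len(messages)
        return (sum(messages) - n * p) / (1 - 2 * p)

    # Example: estimate how many of n users hold the bit 1.
    n, p = 10_000, 0.25
    data = [random.randint(0, 1) for _ in range(n)]
    reports = shuffler(local_randomizer(x, p) for x in data)
    print(f"true count = {sum(data)}, estimate = {analyzer(reports, p):.1f}")

The only trust assumption beyond the local model is the shuffler itself: because the analyzer sees an anonymized multiset of reports rather than attributed messages, each user's randomizer can add less noise than a purely local protocol would require.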
