Verifiable Differential Privacy For When The Curious Become Dishonest

08/18/2022
by Ari Biswas, et al.

Many applications seek to produce differentially private statistics on sensitive data. Traditional approaches in the centralised model rely on a trusted aggregator to gather the raw data, compute aggregate statistics, and introduce appropriate noise. Recent work has tried to relax these trust assumptions and reduce the need for trusted entities. However, such systems can trade off trust for increased noise and still require complete trust in some participants. Moreover, they do not prevent a malicious entity from introducing adversarial noise to skew the result or unmask some inputs. In this paper, we introduce the notion of "verifiable differential privacy with covert security". The purpose is to ensure both privacy of the clients' data and assurance that the output is not subject to any form of adversarial manipulation. The result is that everyone is assured that the noise used for differential privacy has been generated correctly, but no one can determine what the noise was. Should a malicious entity attempt to pervert the protocol, its actions will be detected with a constant probability negligibly close to one. We show that such verifiable privacy is practical and can be implemented at scale.
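As background for the centralised model the abstract describes, the sketch below shows the standard Laplace mechanism, in which the trusted aggregator adds noise calibrated to a query's sensitivity. This is a generic illustration of differential privacy, not the verifiable protocol proposed in the paper; the function and variable names are our own.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    For a query with L1 sensitivity `sensitivity`, this achieves
    epsilon-differential privacy in the centralised (trusted-aggregator)
    model. A dishonest aggregator could skip or bias this step, which is
    the failure mode the paper's verifiable approach targets.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query (sensitivity 1).
ages = [23, 35, 41, 29, 52]
true_count = sum(1 for a in ages if a > 30)
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

A smaller epsilon means a larger noise scale and hence stronger privacy at the cost of accuracy; the paper's contribution is letting all parties verify that this noise was sampled honestly without learning its value.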
