Inferential Privacy: From Impossibility to Database Privacy

03/14/2023
by Sara Saeidian et al.

We investigate the possibility of guaranteeing inferential privacy for mechanisms that release useful information about some data X that contains sensitive information. We describe a general model of utility and privacy in which utility is achieved by disclosing the values of low-entropy features of X, while privacy is maintained by keeping the high-entropy features of X secret. Adopting this model, we prove that meaningful inferential privacy guarantees can be obtained, even though such guarantees are commonly believed to be ruled out by the well-known impossibility result of Dwork and Naor. We then focus on a privacy measure called pointwise maximal leakage (PML), whose guarantees are of the inferential type. We use PML to show that differential privacy admits an inferential formulation: it bounds the information leaked about a single entry in a database assuming that every other entry is known, under the worst-case distribution on the data. Moreover, we define inferential instance privacy (IIP) as a bound on the (non-conditional) information leaked about a single entry in the database under the worst-case distribution, and show that it is equivalent to free-lunch privacy. Overall, our approach to privacy unifies, formalizes, and explains many existing ideas, e.g., why the informed-adversary assumption may lead to underestimating the information leaked about each entry in the database. Furthermore, the insights obtained from our results suggest general methods for improving privacy analyses; for example, we argue that smaller privacy parameters can be obtained by excluding low-entropy prior distributions from protection.
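
To make the comparison in the abstract concrete, here is a minimal sketch of the two notions involved. The formulations below follow standard statements of pointwise maximal leakage and differential privacy; the notation is ours and is not quoted from the paper.

```latex
% Pointwise maximal leakage of a released outcome y about the secret X:
% the worst-case multiplicative blow-up of the posterior over the prior,
% i.e., a Renyi divergence of order infinity.
\[
  \ell(X \to y)
    \;=\; D_\infty\!\left(P_{X \mid Y=y} \,\middle\|\, P_X\right)
    \;=\; \log \max_{x \,:\, P_X(x) > 0}
          \frac{P_{X \mid Y}(x \mid y)}{P_X(x)} .
\]

% Epsilon-differential privacy, for comparison: a mechanism M satisfies
% \varepsilon-DP if, for all databases x, x' differing in a single entry
% and for all measurable output sets S,
\[
  \Pr[M(x) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(x') \in S] .
\]
% Free-lunch privacy demands the same bound for every pair of databases
% x, x', not only neighboring ones.
```

Read through this lens, the abstract's claim is that epsilon-DP bounds the leakage about one entry when all other entries are conditioned on (the informed adversary), whereas IIP bounds the non-conditional leakage about an entry, which is why IIP coincides with the stronger free-lunch guarantee.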
