A Tunable Measure for Information Leakage

06/08/2018
by Jiachun Liao, et al.

A tunable measure for information leakage called maximal α-leakage is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset, conditioned on a disclosed dataset. The choice of α determines the specific adversarial action, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other α, this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven, including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property.
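
For context, the Arimoto quantities referenced in the abstract take the following standard form; this is a sketch using textbook definitions of the Rényi and Arimoto measures, not an excerpt from the paper, whose exact conventions may differ slightly:

% Rényi entropy and Arimoto conditional entropy (standard textbook forms;
% the paper's own notation and conventions may differ).
\[
  H_\alpha(X) = \frac{1}{1-\alpha}\log\sum_x P_X(x)^\alpha,
  \qquad
  H_\alpha^{\mathrm{A}}(X \mid Y) = \frac{\alpha}{1-\alpha}\log\sum_y \Big(\sum_x P_{X,Y}(x,y)^\alpha\Big)^{1/\alpha}.
\]
% Arimoto mutual information and the corresponding channel capacity, which
% the abstract identifies with maximal α-leakage for intermediate α.
\[
  I_\alpha^{\mathrm{A}}(X;Y) = H_\alpha(X) - H_\alpha^{\mathrm{A}}(X \mid Y),
  \qquad
  C_\alpha^{\mathrm{A}}(P_{Y\mid X}) = \sup_{P_X} I_\alpha^{\mathrm{A}}(X;Y).
\]
% As α → 1, I_α^A recovers Shannon mutual information; as α → ∞, the capacity
% tends to \log \sum_y \max_x P_{Y\mid X}(y\mid x), the maximal-leakage expression.

These limits are consistent with the extremal cases stated above (MI at α = 1, MaxL at α = ∞); the precise definition of maximal α-leakage itself is given in the full paper.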
