Bounding Training Data Reconstruction in Private (Deep) Learning

01/28/2022
by Chuan Guo, et al.

Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods – Rényi differential privacy and Fisher information leakage – both offer strong semantic protection against data reconstruction attacks.
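To illustrate the first of the two accounting methods mentioned above, here is a minimal sketch (not the paper's code) of Rényi DP accounting for the Gaussian mechanism, using the standard facts that the mechanism satisfies (α, αΔ²/2σ²)-RDP and that any (α, ε)-RDP guarantee converts to (ε + log(1/δ)/(α−1), δ)-DP. The values of `sigma` and `delta` are illustrative assumptions:

```python
import math

def gaussian_rdp(alpha, sigma, sensitivity=1.0):
    # Rényi DP of the Gaussian mechanism at order alpha:
    # eps(alpha) = alpha * sensitivity^2 / (2 * sigma^2)
    return alpha * sensitivity**2 / (2 * sigma**2)

def rdp_to_dp(rdp_eps, alpha, delta):
    # Standard conversion from (alpha, eps)-RDP to (eps', delta)-DP
    return rdp_eps + math.log(1 / delta) / (alpha - 1)

sigma = 2.0    # illustrative noise multiplier
delta = 1e-5   # illustrative failure probability
# Search over integer orders and take the tightest (eps, delta) guarantee
best_eps = min(rdp_to_dp(gaussian_rdp(a, sigma), a, delta)
               for a in range(2, 256))
print(round(best_eps, 3))
```

The point of the paper's analysis is that such RDP curves ε(α) can be translated not only into (ε, δ)-DP but also into bounds on a reconstruction adversary's success, under the formal threat model the authors define.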
