Private and polynomial time algorithms for learning Gaussians and beyond

11/22/2021
by   Hassan Ashtiani, et al.

We present a fairly general framework for reducing (ε, δ) differentially private (DP) statistical estimation to its non-private counterpart. As the main application of this framework, we give a polynomial-time (ε, δ)-DP algorithm for learning (unrestricted) Gaussian distributions in ℝ^d. The sample complexity of our approach for learning a Gaussian up to total variation distance α is O(d^2/α^2 + d^2 √(ln(1/δ))/(αε)), matching (up to logarithmic factors) the best known information-theoretic (non-efficient) sample complexity upper bound of Aden-Ali, Ashtiani, Kamath (ALT'21). In an independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman (arXiv:2111.04609) proved a similar result using a different approach, with an O(d^(5/2)) sample complexity dependence on d. As another application of our framework, we provide the first polynomial-time (ε, δ)-DP algorithm for robust learning of (unrestricted) Gaussians.
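For concreteness, the stated sample-complexity bound can be evaluated numerically. The sketch below is purely illustrative: it suppresses the hidden constant and logarithmic factors in the O(·) notation (taking them to be 1), so it gives only the order of magnitude, not an actual sample size prescribed by the algorithm.

```python
import math

def gaussian_dp_sample_complexity(d, alpha, eps, delta):
    """Order-of-magnitude evaluation of the bound
    O(d^2/alpha^2 + d^2 * sqrt(ln(1/delta)) / (alpha * eps)).
    Constants and log factors are suppressed (assumed to be 1)."""
    # Non-private statistical estimation cost for a d-dimensional Gaussian.
    statistical_term = d**2 / alpha**2
    # Additional cost incurred by (eps, delta)-differential privacy.
    privacy_term = d**2 * math.sqrt(math.log(1.0 / delta)) / (alpha * eps)
    return statistical_term + privacy_term

# Hypothetical parameter values, chosen only to illustrate the two terms:
# d = 10, alpha = 0.1, eps = 1, delta = 1e-6.
n = gaussian_dp_sample_complexity(10, 0.1, 1.0, 1e-6)
```

For these (hypothetical) parameters, the statistical term dominates; as ε or α shrinks, the privacy term grows and eventually takes over.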

