Robust estimation via generalized quasi-gradients

05/28/2020
by Banghua Zhu, et al.

We explore why many recently proposed robust estimation problems are efficiently solvable, even though the underlying optimization problems are non-convex. We study the loss landscape of these robust estimation problems, and identify the existence of "generalized quasi-gradients". Whenever these quasi-gradients exist, a large family of low-regret algorithms is guaranteed to approximate the global minimum; this includes the commonly used filtering algorithm. For robust mean estimation of distributions under bounded covariance, we show that any first-order stationary point of the associated optimization problem is an approximate global minimum if and only if the corruption level ϵ < 1/3. Consequently, any optimization algorithm that approaches a stationary point yields an efficient robust estimator with breakdown point 1/3. With careful initialization and step size, we improve this to 1/2, which is optimal. For other tasks, including linear regression and joint mean and covariance estimation, the loss landscape is more rugged: there are stationary points arbitrarily far from the global minimum. Nevertheless, we show that generalized quasi-gradients exist and construct efficient algorithms. These algorithms are simpler than previous ones in the literature, and for linear regression we improve the estimation error from O(√(ϵ)) to the optimal rate of O(ϵ) for small ϵ, assuming certified hypercontractivity. For mean estimation with near-identity covariance, we show that a simple gradient descent algorithm achieves breakdown point 1/3 and iteration complexity Õ(d/ϵ^2).
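To make the filtering approach referenced above concrete, below is a minimal sketch (not the authors' implementation) of the classical soft-filtering algorithm for robust mean estimation under bounded covariance: iteratively compute the weighted mean, check the top eigenvalue of the weighted covariance, and downweight samples with large projections onto the top eigenvector. The function name, threshold, and iteration cap are illustrative assumptions.

```python
import numpy as np

def filter_mean(X, sigma2=1.0, max_iter=100):
    """Robustly estimate the mean of X (n x d) when a fraction of rows may be
    adversarially corrupted, assuming clean data has covariance <= sigma2 * I.
    Illustrative sketch; threshold and stopping rule are assumptions."""
    n, d = X.shape
    w = np.ones(n) / n  # soft weights over samples
    for _ in range(max_iter):
        mu = (w[:, None] * X).sum(axis=0) / w.sum()           # weighted mean
        centered = X - mu
        cov = (w[:, None] * centered).T @ centered / w.sum()  # weighted covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        lam, v = eigvals[-1], eigvecs[:, -1]                   # top eigenpair
        if lam <= 2 * sigma2:
            # weighted covariance is certifiably small: return current estimate
            return mu
        # score each point by its squared projection onto the top direction
        tau = (centered @ v) ** 2
        # soft filtering step: downweight points with large scores
        w = w * (1 - tau / tau.max())
        w = np.maximum(w, 0)
    return (w[:, None] * X).sum(axis=0) / w.sum()
```

In the language of the abstract, the downweighting direction acts as a generalized quasi-gradient: each filtering step provably decreases the weight on corrupted points faster than on clean ones, which is why such iterative schemes approximate the global minimum despite non-convexity.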
