Smoothed Analysis in Unsupervised Learning via Decoupling

11/29/2018
by Aditya Bhaskara, et al.

Smoothed analysis is a powerful paradigm for overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis. While polynomial-time smoothed analysis guarantees have been obtained for worst-case intractable problems like tensor decompositions and learning mixtures of Gaussians, such guarantees have been hard to obtain for several other important problems in unsupervised learning. A core technical challenge is to obtain lower bounds on the least singular value of random matrix ensembles with dependent entries, where the entries are given by low-degree polynomials of a few underlying base random variables. In this work, we address this challenge by obtaining high-confidence lower bounds on the least singular value of new classes of structured random matrix ensembles of this kind. We then use these bounds to obtain polynomial-time smoothed analysis guarantees for the following three important problems in unsupervised learning:

1. Robust subspace recovery, when the fraction α of inliers in the d-dimensional subspace T ⊂ R^n is at least α > (d/n)^ℓ for any constant integer ℓ > 0. This contrasts with the known worst-case intractability when α < d/n, and with the previous smoothed analysis result, which needed α > d/n (Hardt and Moitra, 2013).

2. Higher-order tensor decompositions, where we generalize the so-called FOOBI algorithm of Cardoso to find order-ℓ rank-one tensors in a subspace. This allows us to obtain polynomially robust decomposition algorithms for order-2ℓ tensors of rank O(n^ℓ).

3. Learning overcomplete hidden Markov models, where the size of the state space can be any polynomial in the dimension of the observations. This gives the first polynomial-time guarantees for learning overcomplete HMMs in a smoothed analysis model.
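The recurring object behind these results is a structured random matrix whose entries are low-degree polynomials of a few smoothed base variables, e.g., a matrix whose columns are ℓ-th Kronecker (tensor) powers of perturbed vectors. As a minimal illustrative sketch (not the paper's code; the ensemble, dimensions, and perturbation size ρ below are arbitrary choices for demonstration), the following Python snippet builds such an ensemble and empirically measures its least singular value:

```python
import numpy as np

# Illustrative only: probe the least singular value of an ensemble whose
# entries are degree-ell polynomials of smoothed base Gaussians. Columns are
# ell-th Kronecker powers of rho-perturbed unit vectors; all parameters here
# (n, m, ell, rho) are arbitrary choices for demonstration.
rng = np.random.default_rng(0)

def smoothed_tensor_power_matrix(n, m, ell, rho):
    """Return the n^ell x m matrix whose i-th column is vec(a_i ⊗ ... ⊗ a_i)."""
    cols = []
    for _ in range(m):
        a = rng.normal(size=n)
        a /= np.linalg.norm(a)             # stand-in for a worst-case direction
        a = a + rho * rng.normal(size=n)   # the smoothing perturbation
        col = a
        for _ in range(ell - 1):
            col = np.kron(col, a)          # entries: degree-ell monomials in a
        cols.append(col)
    return np.column_stack(cols)

n, ell, rho = 10, 2, 0.1
m = n**ell // 2                            # overcomplete: m >> n, yet m < n^ell
M = smoothed_tensor_power_matrix(n, m, ell, rho)
sigma_min = np.linalg.svd(M, compute_uv=False).min()
print(f"least singular value of a {M.shape} smoothed ensemble: {sigma_min:.4f}")
```

For ℓ = 1 this is an ordinary smoothed matrix; for ℓ ≥ 2 the entries are correlated degree-ℓ monomials of the same base Gaussians, which is exactly the kind of dependence that makes standard least-singular-value arguments fail and that the paper's decoupling technique is designed to handle.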
