Complete Dictionary Recovery over the Sphere
We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0 from Y ∈ R^{n×p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A_0 when X_0 has O(n) nonzeros per column, under a suitable probability model for X_0. In contrast, prior results based on efficient algorithms provide recovery guarantees only when X_0 has O(n^{1-δ}) nonzeros per column, for some constant δ ∈ (0, 1). Our algorithmic pipeline centers on solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show that this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust-region algorithm over the sphere that provably converges to a local minimizer from an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals.
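To make the spherical formulation concrete, here is a minimal, hedged sketch of the idea: recover one row of X_0 by minimizing a smooth sparsity surrogate over the unit sphere. The surrogate h_μ(z) = μ log cosh(z/μ) and the toy Bernoulli-Gaussian model for X_0 follow the setup described above; the plain Riemannian gradient descent used here (tangent-space projection plus normalization as a retraction) is a simplification of the paper's trust-region method, not the algorithm itself. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def riemannian_gd_sphere(Y, mu=0.01, steps=500, lr=0.1, seed=0):
    """Sketch: minimize (1/p) * sum_k h_mu(q^T y_k) over the unit sphere,
    where h_mu(z) = mu * log(cosh(z / mu)) is a smooth surrogate for |z|.
    Plain Riemannian gradient descent stands in for the trust-region method."""
    n, p = Y.shape
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)                # arbitrary initialization on the sphere
    for _ in range(steps):
        z = q @ Y                         # inner products q^T y_k, shape (p,)
        g = Y @ np.tanh(z / mu) / p       # Euclidean gradient of the surrogate
        g -= (g @ q) * q                  # project onto the tangent space at q
        q = q - lr * g                    # gradient step
        q /= np.linalg.norm(q)            # retract back onto the sphere
    return q

# Toy instance: orthogonal A_0 and Bernoulli-Gaussian sparse X_0.
rng = np.random.default_rng(1)
n, p, theta = 10, 2000, 0.2
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0
q = riemannian_gd_sphere(Y)
# For orthogonal A0, the minimizers align with columns of A0 (up to sign),
# so max |A0^T q| should be close to 1.
alignment = np.max(np.abs(A0.T @ q))
```

The tangent-space projection g − (gᵀq)q and the normalization retraction are the standard ingredients of first-order optimization on the sphere; the paper's contribution is showing that the landscape of this objective has no spurious local minima, so such local methods succeed from any starting point.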