Local identifiability of l_1-minimization dictionary learning: a sufficient and almost necessary condition

05/17/2015
by Siqi Wu, et al.

We study the theoretical properties of learning a dictionary from N signals x_i ∈ R^K, i = 1, ..., N, via l_1-minimization. We assume that the x_i's are i.i.d. random linear combinations of the K columns of a complete (i.e., square and invertible) reference dictionary D_0 ∈ R^{K×K}. Here, the random linear coefficients are generated from either the s-sparse Gaussian model or the Bernoulli-Gaussian model. First, for the population case, we establish a sufficient and almost necessary condition for the reference dictionary D_0 to be locally identifiable, i.e., a local minimum of the expected l_1-norm objective function. Our condition covers both sparse and dense regimes of the random linear coefficients and significantly improves the sufficient condition of Gribonval and Schnass (2010). In addition, we show that for a complete μ-coherent reference dictionary, i.e., a dictionary with absolute pairwise column inner-products at most μ ∈ [0,1), local identifiability holds even when the random linear coefficient vector has up to O(μ^{-2}) nonzeros on average. Moreover, our local identifiability results also translate to the finite-sample case with high probability, provided that the number of signals N scales as O(K log K).
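
As a concrete illustration of the setup described above, here is a minimal Python sketch, assuming the Bernoulli-Gaussian coefficient model and an empirical l_1 objective of the form (1/N) Σ_i ||D^{-1} x_i||_1 with unit-norm dictionary columns as the normalization. The function names and parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def bernoulli_gaussian(K, N, p, rng):
    """Draw N coefficient vectors in R^K: each entry is nonzero with
    probability p, and nonzero entries are standard Gaussian."""
    support = rng.random((K, N)) < p
    return support * rng.standard_normal((K, N))

def l1_objective(D, X):
    """Empirical l_1 objective for a complete (square, invertible) dictionary D:
    the average of ||D^{-1} x_i||_1 over the N signal columns of X."""
    A = np.linalg.solve(D, X)            # coefficients D^{-1} x_i
    return np.abs(A).mean(axis=1).sum()  # (1/N) * sum_i ||D^{-1} x_i||_1

rng = np.random.default_rng(0)
K, N, p = 50, 5000, 0.1                  # p*K is the average number of nonzeros

# A complete reference dictionary (here orthogonal for simplicity) and signals x_i = D_0 a_i.
D0 = np.linalg.qr(rng.standard_normal((K, K)))[0]
X = D0 @ bernoulli_gaussian(K, N, p, rng)

# Local identifiability means D_0 is a local minimum of the objective, so a small
# perturbation (renormalized to unit-norm columns) should not decrease it.
Dp = D0 + 1e-2 * rng.standard_normal((K, K))
Dp /= np.linalg.norm(Dp, axis=0, keepdims=True)
print(l1_objective(D0, X), l1_objective(Dp, X))
```

In this sketch the population objective of the abstract is replaced by its finite-sample average over N signals, which is the quantity the paper's finite-sample results concern.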
