Sparse High-Dimensional Linear Regression: Algorithmic Barriers and a Local Search Algorithm

11/14/2017
by David Gamarnik, et al.

We consider a sparse high-dimensional regression model where the goal is to recover a k-sparse unknown vector β^* from n noisy linear observations of the form Y = Xβ^* + W ∈ R^n, where X ∈ R^{n × p} has iid N(0,1) entries and W ∈ R^n has iid N(0,σ^2) entries. Under certain assumptions on the parameters, an intriguing asymptotic gap appears between the minimum sample size n^* for which recovery is information-theoretically possible and the minimum sample size n_alg for which an efficient algorithm is known to provably recover β^*. In a recent paper it was conjectured that this gap is not artificial, in the sense that for sample sizes n ∈ [n^*, n_alg] the problem is algorithmically hard. We support this conjecture in two ways. First, we show that a well-known recovery mechanism, the Basis Pursuit Denoising scheme, provably fails to ℓ_2-stably recover the unknown vector when n ∈ [n^*, c n_alg] for a sufficiently small constant c > 0. Second, we establish that n_alg is, up to a multiplicative constant factor, a phase transition point for the appearance of a certain Overlap Gap Property (OGP) over the space of k-sparse vectors. The presence of such an Overlap Gap Property phase transition, a notion originating in statistical physics, is known to provide evidence of algorithmic hardness. Finally, we show that if n > C n_alg for a sufficiently large constant C > 0, a very simple algorithm based on local search improvements correctly infers the support of the unknown vector β^*, adding it to the list of provably successful algorithms for the high-dimensional linear regression problem.
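To make the local search idea concrete, the following is a minimal sketch, not the authors' exact procedure: starting from a random k-subset, swap one index in the current support for one outside it whenever the swap lowers the least-squares residual ‖Y − X_S β_S‖_2^2, and stop at a local minimum. All parameter values (n, p, k, σ) and the unit signal strength are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma = 300, 1000, 10, 0.1   # illustrative sizes, not from the paper

# Data generated according to the model Y = X beta^* + W.
X = rng.standard_normal((n, p))
true_support = rng.choice(p, size=k, replace=False)
beta_star = np.zeros(p)
beta_star[true_support] = 1.0          # unit signal strength (an assumption)
Y = X @ beta_star + sigma * rng.standard_normal(n)

def residual(S):
    """Squared least-squares residual of Y regressed on the columns indexed by S."""
    cols = X[:, sorted(S)]
    beta_S = np.linalg.lstsq(cols, Y, rcond=None)[0]
    return float(np.sum((Y - cols @ beta_S) ** 2))

# Local search over k-sparse supports via single-coordinate swaps:
# accept the first swap that strictly lowers the residual, and repeat
# until no improving swap exists (a local minimum of the landscape).
S = set(rng.choice(p, size=k, replace=False))
best = residual(S)
improved = True
while improved:
    improved = False
    for i in list(S):
        for j in range(p):
            if j in S:
                continue
            candidate = (S - {i}) | {j}
            r = residual(candidate)
            if r < best:
                S, best, improved = candidate, r, True
                break
        if improved:
            break

print("support recovered:", S == set(true_support))
```

Whether such swap dynamics escape poor local minima depends on the sample size; the abstract's claim is that for n > C n_alg the landscape is benign enough for this kind of local improvement to reach the true support.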
