Sparse linear regression – CLuP achieves the ideal exact ML
In this paper we revisit one of the classical statistical problems, the so-called sparse maximum-likelihood (ML) linear regression. As a way of attacking this type of regression, we present a novel CLuP mechanism that to a degree relies on the Random Duality Theory (RDT) based algorithmic machinery that we recently introduced in <cit.>. After the initial success that CLuP exhibited in achieving the exact ML performance while maintaining excellent computational-complexity properties in MIMO ML detection in <cit.>, one would naturally expect that a similar type of success can be achieved in other ML considerations. The results that we present here confirm that such an expectation is indeed reasonable. In particular, within the sparse regression context, the introduced CLuP mechanism indeed turns out to be able to achieve the ideal ML performance. Moreover, it can substantially outperform some of the most prominent earlier state-of-the-art algorithmic concepts, among them even the variants of the famous LASSO and SOCP from <cit.>. Also, our recent results presented in <cit.> showed that CLuP has excellent large-scale and the so-called rephasing abilities. Since such large-scale algorithmic features are possibly even more desirable within the sparse regression context, we here also demonstrate that the basic CLuP ideas can be reformulated to enable solving, with relative ease, regression problems with several thousand unknowns.
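The CLuP mechanism itself is developed in the full paper and is not specified here. For context only, the following is a minimal sketch of the sparse linear regression setup and of the LASSO baseline that CLuP is compared against, solved with plain ISTA (proximal gradient with soft-thresholding). All dimensions, the noise level, and the regularization parameter are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Approximately solve min_x 0.5*||A x - y||^2 + lam*||x||_1
    via ISTA (proximal gradient descent with soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the smooth part
        z = x - grad / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy sparse-regression instance (illustrative sizes): n unknowns,
# m noisy linear measurements of a k-sparse ground-truth vector.
rng = np.random.default_rng(0)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista_lasso(A, y, lam=0.01)
```

This is only the convex-relaxation baseline; the paper's point is precisely that CLuP can go beyond this type of relaxation toward the exact ML performance.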