Does the ℓ_1-norm Learn a Sparse Graph under Laplacian Constrained Graphical Models?

06/26/2020
by   Jiaxi Ying, et al.

We consider the problem of learning a sparse graph under Laplacian constrained Gaussian graphical models. This problem can be formulated as a penalized maximum likelihood estimation of the precision matrix under Laplacian structural constraints. As in the classical graphical lasso problem, recent works have used ℓ_1-norm regularization to promote sparsity in Laplacian-constrained precision matrix estimation. However, we find that the widely used ℓ_1-norm is not effective in imposing a sparse solution in this problem. Through empirical evidence, we observe that the number of nonzero graph weights grows as the regularization parameter increases. From a theoretical perspective, we prove that, surprisingly, a large regularization parameter leads to a fully connected graph. To address this issue, we propose a nonconvex estimation method that solves a sequence of weighted ℓ_1-norm penalized sub-problems, and we prove that the statistical error of the proposed estimator matches the minimax lower bound. To solve each sub-problem, we develop a projected gradient descent algorithm that enjoys a linear convergence rate. Numerical experiments on synthetic and real-world data sets from the recent COVID-19 pandemic and financial stock markets demonstrate the effectiveness of the proposed method. An open-source R package containing the code for all the experiments is available at https://github.com/mirca/sparseGraph.
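To make the approach concrete, below is a minimal sketch, in Python/NumPy rather than the authors' R package, of projected gradient descent for a single weighted ℓ_1-penalized sub-problem. The helper names (laplacian_op, laplacian_adj, pgd_laplacian_mle), the fixed step size eta, and the fixed penalty-weight vector rho are illustrative assumptions, not the paper's implementation. The sketch parameterizes the Laplacian by nonnegative edge weights w and uses the identity gdet(L) = det(L + (1/p)11ᵀ), valid for connected-graph Laplacians, to handle the singular precision matrix.

```python
# Minimal sketch (assumed, not the authors' code) of projected gradient
# descent for one weighted-l1 penalized Laplacian-constrained MLE:
#   min_{w >= 0}  -log gdet(L(w)) + tr(L(w) S) + sum_k rho_k * w_k
import numpy as np

def laplacian_op(w, p):
    """Map edge weights w (length p*(p-1)/2) to a p x p graph Laplacian."""
    L = np.zeros((p, p))
    i, j = np.triu_indices(p, k=1)
    L[i, j] = -w                       # off-diagonals are nonpositive
    L = L + L.T
    np.fill_diagonal(L, -L.sum(axis=1))  # rows sum to zero
    return L

def laplacian_adj(M):
    """Adjoint of laplacian_op: [L* M]_k = M_ii + M_jj - M_ij - M_ji."""
    i, j = np.triu_indices(M.shape[0], k=1)
    return M[i, i] + M[j, j] - M[i, j] - M[j, i]

def pgd_laplacian_mle(S, rho, eta=1e-3, n_iter=500):
    """Projected gradient descent on the edge weights w, using
    gdet(L) = det(L + J) with J = (1/p) 11^T for connected graphs."""
    p = S.shape[0]
    J = np.ones((p, p)) / p
    w = np.ones(p * (p - 1) // 2)      # strictly positive initialization
    for _ in range(n_iter):
        L = laplacian_op(w, p)
        # gradient of the smooth part plus the (linear) weighted-l1 term
        grad = laplacian_adj(S - np.linalg.inv(L + J)) + rho
        w = np.maximum(w - eta * grad, 0.0)  # projection onto w >= 0
    return laplacian_op(w, p)
```

In the full nonconvex method described in the abstract, the penalty weights rho would be refreshed between sub-problems from the previous iterate; they are held fixed here because the sketch covers only one sub-problem.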
