Newton Method with Variable Selection by the Proximal Gradient Method

11/30/2022
by Ryosuke Shimmura, et al.

In sparse estimation, in which the sum of a loss function and a regularization term is minimized, methods such as the proximal gradient method and the proximal Newton method are applied. The former converges slowly, while the latter converges quickly but is inefficient for problems such as group lasso. In this paper, we examine how to find a solution efficiently by determining the convergence destination of the proximal gradient method, that is, by using it to select the variables and then applying a Newton method to them. However, such a Newton method has so far been proposed only for the case in which the Lipschitz constant of the derivative of the loss function is known, and the case in which it is unknown has not been studied theoretically. We show that the Newton method converges even when the Lipschitz constant is unknown, extending the existing theory. Furthermore, we propose a new quasi-Newton method that avoids Hessian computations to improve efficiency, and we prove that it also converges quickly, providing a theoretical guarantee. Finally, numerical experiments show that the proposed methods can significantly improve efficiency.
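The abstract describes a two-stage idea: use the proximal gradient method to identify which variables the iterates converge to, then apply a Newton-type method restricted to those variables. The following is a minimal sketch of that idea, not the authors' implementation, illustrated on an L1-regularized least-squares (lasso) problem; the step size, iteration counts, and all function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_then_newton(X, y, lam, n_prox=200, n_newton=10, tol=1e-10):
    """Sketch: proximal gradient for variable selection, then Newton refinement.

    Minimizes (1/2n)||X b - y||^2 + lam * ||b||_1 (illustrative choice of loss).
    """
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, where L = ||X||_2^2 / n is the Lipschitz constant of the gradient.
    step = n / np.linalg.norm(X, 2) ** 2

    # Stage 1: proximal gradient (ISTA) iterations to identify the active set.
    for _ in range(n_prox):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)

    support = np.flatnonzero(np.abs(beta) > 0)

    # Stage 2: Newton steps on the selected variables only. With the support and
    # signs fixed, the objective is smooth in the nonzero coordinates.
    Xs = X[:, support]
    bs = beta[support]
    signs = np.sign(bs)
    H = Xs.T @ Xs / n                      # Hessian restricted to the support
    for _ in range(n_newton):
        g = Xs.T @ (Xs @ bs - y) / n + lam * signs
        delta = np.linalg.solve(H, g)
        bs = bs - delta
        if np.linalg.norm(delta) < tol:
            break

    beta_out = np.zeros(p)
    beta_out[support] = bs
    return beta_out
```

Because the loss here is quadratic, the Newton stage converges in a single step; the paper's setting covers general smooth losses (and group lasso penalties), and its quasi-Newton variant further replaces the explicit Hessian with an approximation.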
