Proximal Gradient Method for Manifold Optimization
This paper considers manifold optimization problems whose objective function is nonsmooth and nonconvex. Existing methods for this class of problems fall into two categories. Algorithms in the first category rely on subgradient information of the objective function, which leads to slow convergence rates. Algorithms in the second category are based on operator-splitting techniques, but they usually lack rigorous convergence guarantees. In this paper, we propose a retraction-based proximal gradient method for solving this class of problems. We prove that the proposed method converges globally to a stationary point, and we analyze its iteration complexity for obtaining an ϵ-stationary solution. Numerical results on sparse PCA and compressed modes problems demonstrate the advantages of the proposed method.
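To illustrate the retraction-based proximal gradient idea on a concrete instance, the following is a minimal sketch for sparse PCA on the unit sphere: a Riemannian gradient step, a soft-thresholding (proximal) step for the ℓ1 penalty, and a retraction back to the sphere by normalization. This is a simplified variant, not the paper's exact scheme (which solves a proximal subproblem restricted to the tangent space); the function names, step size, and problem instance are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (componentwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def sparse_pca_sketch(A, lam=0.1, step=0.1, iters=500, seed=0):
    # Sketch: minimize f(x) + lam*||x||_1 over the unit sphere,
    # with f(x) = -x^T A x (leading sparse principal component).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = -2.0 * A @ x              # Euclidean gradient of f
        rg = g - (x @ g) * x          # project onto tangent space at x
        y = soft_threshold(x - step * rg, step * lam)  # proximal step
        nrm = np.linalg.norm(y)
        if nrm > 0:
            x = y / nrm               # retraction: normalize back to the sphere
    return x
```

For a nearly diagonal covariance matrix, the iterate concentrates on the dominant coordinate while small entries are zeroed by the shrinkage step.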