A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r^3 κ^2 n log n) random measurements of a positive semidefinite n × n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
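As a rough illustration of the setting only (not the paper's exact algorithm, step size, or initialization analysis), the following is a minimal NumPy sketch of factored gradient descent for matrix sensing: it minimizes f(U) = (1/2m) Σ_i (⟨A_i, U Uᵀ⟩ − b_i)² over an n × r factor U, starting from a spectral initialization. The objective, the initialization, and all function and parameter names here are illustrative assumptions.

```python
import numpy as np

def matrix_sensing_gd(A, b, r, step=0.1, iters=500):
    """Illustrative sketch: gradient descent on the factored objective
        f(U) = (1/2m) * sum_i (<A_i, U U^T> - b_i)^2,
    where the rank-r PSD matrix X = U U^T is to be recovered.
    A: (m, n, n) array of measurement matrices; b: (m,) measurements.
    Not the paper's certified procedure; parameters are placeholders.
    """
    m, n, _ = A.shape
    # Spectral initialization: top-r eigenpairs of (1/m) * sum_i b_i A_i.
    M = np.tensordot(b, A, axes=1) / m
    M = (M + M.T) / 2
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:r]
    U = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
    for _ in range(iters):
        X = U @ U.T
        # Residuals r_i = <A_i, U U^T> - b_i.
        res = np.einsum('kij,ij->k', A, X) - b
        # Gradient of f w.r.t. U: (1/m) * sum_i r_i (A_i + A_i^T) U.
        grad = np.tensordot(res, A + A.transpose(0, 2, 1), axes=1) @ U / m
        U = U - step * grad
    return U @ U.T

# Example usage on synthetic Gaussian measurements (illustrative only):
# n, r, m = 50, 2, 1000
# Ustar = np.random.randn(n, r); Xstar = Ustar @ Ustar.T
# A = np.random.randn(m, n, n); A = (A + A.transpose(0, 2, 1)) / 2
# b = np.einsum('kij,ij->k', A, Xstar)
# Xhat = matrix_sensing_gd(A, b, r)
```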