Implicit Regularization in Matrix Sensing: A Geometric View Leads to Stronger Results

08/27/2020
by Armin Eftekhari, et al.

We may view low-rank matrix sensing as a learning problem with infinitely many possible outcomes, in which the desired outcome is the unique low-rank matrix that agrees with the available data. To find this matrix, nuclear norm minimization, for example, explicitly biases the learning outcome towards being low-rank. In contrast, we establish here that simple gradient flow of the feasibility gap is implicitly biased towards low-rank outcomes and successfully solves the matrix sensing problem, provided that the initialization is low-rank and sufficiently close to the manifold of feasible outcomes. Compared to the state of the art, the geometric perspective adopted here leads to substantially stronger results.
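To make the setting concrete, here is a minimal numerical sketch, not the authors' implementation: plain gradient descent (a discretization of the gradient flow the abstract describes) on the feasibility gap f(X) = (1/2m)||A(XX^T) - y||^2 in a factored parameterization, started from a small low-rank initialization. The problem sizes, the initialization scale, and the step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 200                     # ambient dimension, rank, measurements

# Ground truth M* = U* U*^T and a random Gaussian linear measurement operator.
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)    # y_k = <A_k, M*>

# Low-rank initialization with small scale: a crude stand-in for the paper's
# condition that the initialization is low-rank and near the feasible manifold.
X = 1e-3 * rng.standard_normal((n, r))
eta = 1e-3                               # heuristic step size

for _ in range(10_000):
    residual = np.einsum('mij,ij->m', A, X @ X.T) - y   # A(X X^T) - y
    G = np.einsum('m,mij->ij', residual, A) / m         # adjoint applied to residual
    X -= eta * (G + G.T) @ X                            # gradient of f at X

err = np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star)
print(f"relative recovery error: {err:.2e}")            # small => M* recovered
```

Note that nothing in the objective penalizes rank: the bias towards the low-rank solution comes entirely from the initialization and the gradient dynamics, which is the implicit regularization phenomenon the paper analyzes.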
