Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent

01/15/2020
by   Satoshi Takabe, et al.

Deep unfolding is a promising deep-learning technique in which an iterative algorithm is unrolled into a deep network architecture with trainable parameters. For gradient descent algorithms, the training process often yields accelerated convergence with learned non-constant step-size parameters whose behavior is neither intuitive nor interpretable from conventional theory. In this paper, we provide a theoretical interpretation of the learned step sizes of deep-unfolded gradient descent (DUGD). We first prove that training DUGD reduces not only the mean-squared-error loss but also the spectral radius governing the convergence rate. Next, we show that minimizing an upper bound on the spectral radius naturally leads to the Chebyshev step, a step-size sequence based on Chebyshev polynomials. Numerical experiments confirm that Chebyshev steps qualitatively reproduce the step-size parameters learned in DUGD, which provides a plausible interpretation of the learned parameters. In addition, we show that Chebyshev steps achieve the lower bound on the convergence rate of first-order methods in a specific limit, without learning or momentum terms.
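
To make the idea of a Chebyshev step schedule concrete, the following is a minimal sketch, assuming the standard Chebyshev-iteration construction for a quadratic objective: the step sizes are the reciprocals of the Chebyshev nodes on the eigenvalue interval [lam_min, lam_max]. The function names, variable names, and the toy problem are illustrative assumptions, not code from the paper.

```python
import numpy as np

def chebyshev_steps(lam_min, lam_max, T):
    """Step sizes given by reciprocals of the T Chebyshev nodes on [lam_min, lam_max]."""
    t = np.arange(T)
    nodes = (lam_max + lam_min) / 2 + (lam_max - lam_min) / 2 * np.cos((2 * t + 1) * np.pi / (2 * T))
    return 1.0 / nodes

def gradient_descent(A, b, steps):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x with a prescribed step-size schedule."""
    x = np.zeros(b.shape)
    for gamma in steps:
        x = x - gamma * (A @ x - b)  # gradient of f is A x - b
    return x

# Toy usage: a random symmetric positive-definite matrix whose extreme
# eigenvalues define the Chebyshev nodes.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)
b = rng.standard_normal(20)
lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
steps = chebyshev_steps(lam[0], lam[-1], T=30)
x_hat = gradient_descent(A, b, steps)
print(np.linalg.norm(A @ x_hat - b))  # residual should be small after 30 steps
```

Note that the ordering of Chebyshev steps matters for numerical stability on ill-conditioned problems; the straightforward ordering above suffices for this well-conditioned toy example.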
