Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution

02/07/2022
by Lam M. Nguyen et al.

Deep neural networks (DNNs) have shown great success in many machine learning tasks. Training them is challenging because the loss surface of the network architecture is generally non-convex, or even non-smooth. How, and under what assumptions, can convergence to a global minimum be guaranteed? We propose a reformulation of the minimization problem that allows for a new recursive algorithmic framework. Using bounded-style assumptions, we prove convergence to an ε-(global) minimum using Õ(1/ε^3) gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework, as well as further investigation of its non-standard bounded-style assumptions. This new direction broadens our understanding of why, and under what circumstances, training of a DNN converges to a global minimum.
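For context, the "finite-sum" problem in the title refers to the standard empirical-risk objective; a minimal sketch of that usual formulation (not necessarily the paper's proposed reformulation) is

\[
\min_{w \in \mathbb{R}^d} \; F(w) := \frac{1}{n} \sum_{i=1}^{n} f_i(w),
\]

where $f_i$ is the loss incurred on the $i$-th of the $n$ training examples. In this setting, an ε-(global) minimum is a point $\hat{w}$ with $F(\hat{w}) - \min_{w} F(w) \le \varepsilon$, and the Õ(1/ε^3) complexity counts the individual gradient computations $\nabla f_i$ needed to reach such a point.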
