Modified Regularized Dual Averaging Method for Training Sparse Convolutional Neural Networks

07/11/2018
by   Xiaodong Jia, et al.

We propose a modified regularized dual averaging (RDA) method for training sparse deep convolutional neural networks. The regularized dual averaging method has been shown to produce sparse solutions in convex optimization problems, but it has not previously been applied to deep learning. We analyze the modified method in the convex setting and prove its convergence. The modified method obtains sparser solutions than conventional sparse optimization methods such as proximal-SGD, while matching the accuracy of stochastic gradient descent with momentum on certain datasets.
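The abstract does not spell out the update rule, but the method builds on the classical ℓ1-regularized dual averaging scheme of Xiao (2010): threshold the running average of all past stochastic gradients rather than the current iterate. The sketch below shows that classical RDA step, not the authors' modification; the parameter names (`lam`, `gamma`) and the interface are illustrative assumptions.

```python
import numpy as np

def rda_l1_step(grad, grad_avg, t, lam=1e-3, gamma=1.0):
    """One step of classical l1-regularized dual averaging (Xiao, 2010).

    grad     : stochastic gradient at step t (t is 1-indexed)
    grad_avg : running average of all past gradients
    lam      : l1 regularization strength (drives weights exactly to zero)
    gamma    : scaling parameter of the strongly convex proximal term
    Returns the new weight vector and the updated gradient average.
    """
    # Running gradient average:  g_bar_t = ((t-1) * g_bar_{t-1} + g_t) / t
    grad_avg = ((t - 1) * grad_avg + grad) / t

    # Closed-form minimizer of  <g_bar_t, w> + lam*||w||_1 + (gamma / (2*sqrt(t))) * ||w||^2 :
    # soft-threshold the averaged gradient, then rescale by -sqrt(t)/gamma.
    scale = -np.sqrt(t) / gamma
    w_new = scale * np.sign(grad_avg) * np.maximum(np.abs(grad_avg) - lam, 0.0)
    return w_new, grad_avg
```

Because the threshold `lam` is applied to the full gradient average rather than to a single step scaled by a decaying learning rate, the truncation does not weaken as training proceeds; this is the usual explanation for why RDA-type methods yield sparser solutions than proximal-SGD, consistent with the comparison stated above.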
