Embedding Differentiable Sparsity into Deep Neural Network

06/23/2020
by Yongjin Lee, et al.

In this paper, we propose embedding sparsity into the structure of deep neural networks, so that model parameters can become exactly zero during training with stochastic gradient descent. The network thus learns its sparsified structure and its weights simultaneously. The proposed approach can learn structured as well as unstructured sparsity.
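To make the idea concrete, here is a minimal sketch of one standard way to obtain exactly-zero parameters during gradient-based training: proximal gradient descent with a soft-threshold step (ISTA). This is an illustrative example and not necessarily the method proposed in the paper; the data, function names, and hyperparameters are assumptions for the demonstration.

```python
import numpy as np

def soft_threshold(w, t):
    # Shrinks each weight toward zero; any weight with |w| <= t
    # becomes exactly 0.0, giving true sparsity during training.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_sparse(X, y, lam=0.5, lr=0.1, steps=300):
    # Least-squares loss plus an L1 penalty handled by the proximal step,
    # so sparsity and weights are learned in the same gradient loop.
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # gradient of the smooth loss
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step zeroes weights
    return w

# Toy data where only the first two of five features matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0])
w = train_sparse(X, y)
print(w)  # irrelevant weights are driven to exactly 0.0
```

The key point the sketch illustrates is that the thresholding operator produces exact zeros (not merely small values), so the sparsity pattern itself is discovered during ordinary gradient training rather than imposed by pruning afterwards.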
