Progressive Deep Neural Networks Acceleration via Soft Filter Pruning

08/22/2018
by Yang He, et al.

This paper proposes a Progressive Soft Filter Pruning (PSFP) method that prunes the filters of deep neural networks so that they can be accelerated at inference time. Specifically, PSFP prunes the network progressively and allows the pruned filters to be updated when the model is trained after pruning. PSFP has three advantages over previous works: 1) Larger model capacity. Updating previously pruned filters gives our approach a larger optimization space than fixing the pruned filters to zero, so the network trained by our method has a larger capacity to learn from the training data. 2) Less dependence on the pre-trained model. This larger capacity enables our method to train from scratch and prune the model simultaneously, whereas previous filter pruning methods must start from a pre-trained model to guarantee their performance. Empirically, PSFP trained from scratch outperforms previous filter pruning methods. 3) More stable filter selection. Pruning the network progressively makes the selection of low-norm filters much more stable, which has the potential to achieve better performance. Moreover, our approach is demonstrated to be effective for many advanced CNN architectures. Notably, on ILSVRC-2012, our method reduces more than 42% of FLOPs on ResNet-101 with even a 0.2% top-5 accuracy improvement. On ResNet-50, our progressive pruning method achieves a 1.08% accuracy improvement over the pruning method without progressive pruning.
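To make the mechanism concrete, here is a minimal PyTorch sketch of soft filter pruning with a progressive schedule. The linear rate schedule, the final rate of 0.3, and the `train_one_epoch` helper are illustrative assumptions, not the paper's exact settings; what it shows are the two ideas from the abstract: the lowest-norm filters are zeroed after each training epoch but remain trainable, and the pruning rate grows over epochs rather than being applied all at once.

```python
import torch
import torch.nn as nn

def progressive_prune_rate(epoch: int, total_epochs: int, target_rate: float) -> float:
    # Illustrative schedule (an assumption, not the paper's exact one):
    # grow the pruning rate linearly from 0 to the final target.
    return target_rate * min(1.0, (epoch + 1) / total_epochs)

def soft_filter_prune(model: nn.Module, prune_rate: float) -> None:
    # Zero the lowest-L2-norm filters of every conv layer. The zeroed
    # filters stay in the model and keep receiving gradient updates,
    # so later training can revive them -- the "soft" part of the method.
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                w = module.weight.data                      # (out_ch, in_ch, kH, kW)
                norms = w.view(w.size(0), -1).norm(p=2, dim=1)
                n_prune = int(w.size(0) * prune_rate)
                if n_prune == 0:
                    continue
                _, low_idx = torch.topk(norms, n_prune, largest=False)
                w[low_idx] = 0.0                            # prune, but keep trainable

# Training sketch: train, then prune with the progressively growing rate,
# so pruned filters may be updated again in the next epoch.
# `train_one_epoch` is a hypothetical helper standing in for a normal loop.
# for epoch in range(total_epochs):
#     train_one_epoch(model, loader, optimizer)
#     rate = progressive_prune_rate(epoch, total_epochs, target_rate=0.3)
#     soft_filter_prune(model, rate)
```

After the final epoch, the zeroed filters can be physically removed to produce a smaller network, which is where the inference speedup is actually realized.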
