Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints

05/21/2018
by Yoojin Choi, et al.

We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity, whether deployed on conventional systems that use spatial-domain convolution or on lower-complexity systems designed for Winograd convolution. Furthermore, we explore the universal quantization and compression of these networks. In particular, the proposed framework produces one compressed model whose convolutional filters are sparse not only in the spatial domain but also in the Winograd domain. Hence, one compressed model can be deployed universally on any platform, without the need for re-training on the deployed platform, and the sparsity of its convolutional filters can be exploited for further complexity reduction in either domain. To achieve a better compression ratio, the sparse model is compressed in the spatial domain, which has fewer parameters. In our experiments, we obtain 24.2x, 47.7x and 35.4x compressed models for ResNet-18, AlexNet and CT-SRCNN, while their computational complexity is also reduced by 4.5x, 5.1x and 23.5x, respectively.
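As a brief illustration of why joint sparsity is a nontrivial constraint, the following sketch (not from the paper; it assumes only the standard filter-transform matrix G of Winograd's F(2x2, 3x3) convolution) shows that a filter pruned in the spatial domain generally loses much of its sparsity after the transform G w G^T, so sparsity in one domain does not carry over to the other, which motivates the joint constraint.

    import numpy as np

    # Filter-transform matrix G for Winograd F(2x2, 3x3): a 3x3 spatial
    # filter w maps to the 4x4 Winograd-domain filter G @ w @ G.T.
    G = np.array([[1.0,  0.0, 0.0],
                  [0.5,  0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0,  0.0, 1.0]])

    def winograd_filter(w):
        """Transform a 3x3 spatial filter to the Winograd domain."""
        return G @ w @ G.T

    def sparsity(x, tol=1e-8):
        """Fraction of entries that are numerically zero."""
        return float(np.mean(np.abs(x) < tol))

    # A spatially sparse 3x3 filter: 7 of its 9 taps are zero.
    w = np.zeros((3, 3))
    w[0, 0], w[2, 2] = 1.0, -1.0

    print(f"spatial sparsity:  {sparsity(w):.2f}")                   # 0.78
    print(f"Winograd sparsity: {sparsity(winograd_filter(w)):.2f}")  # 0.38

In this hypothetical example, a filter with 78% spatial sparsity retains only about 38% sparsity in the Winograd domain, so pruning each domain independently would yield two incompatible models; the paper's framework instead produces a single model sparse in both.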
