An Overview of Neural Network Compression

06/05/2020
by James O'Neill, et al.

Overparameterized networks trained to convergence have shown impressive performance in domains such as computer vision and natural language processing. However, pushing the state of the art on salient tasks in these domains has meant ever-larger models that are increasingly difficult for machine learning practitioners to use, given their growing memory and storage requirements, not to mention their larger carbon footprint. In recent years there has therefore been a resurgence of interest in model compression techniques, particularly for deep convolutional neural networks and self-attention based networks such as the Transformer. This paper provides a timely overview of both established and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation and combinations thereof. We assume a basic familiarity with deep learning architectures, namely Recurrent Neural Networks <cit.>, Convolutional Neural Networks <cit.> and Self-Attention based networks <cit.>. Most of the papers discussed are proposed in the context of at least one of these DNN architectures.
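To make two of the surveyed techniques concrete, the sketch below illustrates magnitude-based weight pruning and simulated uniform quantization on a toy weight matrix. This is a minimal NumPy illustration of the general ideas, not the method of any particular paper discussed in the survey; the function names and the 50% sparsity / 8-bit settings are illustrative choices.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value, then mask below it.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights, bits=8):
    """Simulate symmetric uniform quantization (quantize-then-dequantize)."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # toy dense weight matrix

W_pruned = magnitude_prune(W, sparsity=0.5)
W_quant = uniform_quantize(W, bits=8)

print(np.mean(W_pruned == 0.0))      # fraction of weights zeroed
print(np.max(np.abs(W_quant - W)))   # worst-case quantization error
```

Pruning trades parameters for zeros that sparse formats can skip, while quantization keeps every weight but at lower precision; the survey covers both, along with methods that combine them.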
