Pyramid Vector Quantization and Bit Level Sparsity in Weights for Efficient Neural Networks Inference

11/24/2019
by   Vincenzo Liguori, et al.

This paper discusses three basic blocks for the inference of convolutional neural networks (CNNs). Pyramid Vector Quantization (PVQ) is presented as an effective quantizer for CNN weights, resulting in highly sparse and compressible networks. Properties of PVQ are exploited to eliminate multipliers during inference while maintaining high performance. The result is then extended to any other quantized weights. The Tiny Yolo v3 CNN is used to compare these basic blocks.
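As a rough illustration of the quantizer the abstract refers to, the sketch below projects a weight vector onto the PVQ pyramid S(N, K), i.e. the set of N-dimensional integer vectors whose absolute values sum to exactly K. This is a generic greedy scale-round-adjust scheme, not necessarily the exact algorithm used in the paper; the function name and the correction heuristic are illustrative assumptions.

```python
import numpy as np

def pvq_quantize(w, K):
    """Illustrative PVQ: map w to an integer vector q with sum(|q_i|) == K.

    Greedy sketch (an assumption, not the paper's exact method):
    scale w to L1 norm K, round, then adjust entries until the
    L1 constraint holds exactly.
    """
    w = np.asarray(w, dtype=float)
    l1 = np.abs(w).sum()
    if l1 == 0:
        q = np.zeros(len(w), dtype=int)
        q[0] = K  # degenerate case: put all "pulses" on one entry
        return q
    x = w * (K / l1)            # scale so the target L1 norm is K
    q = np.rint(x).astype(int)  # round to nearest integers
    err = K - np.abs(q).sum()
    while err > 0:  # too few pulses: grow the most under-quantized entry
        i = int(np.argmax(np.abs(x) - np.abs(q)))
        q[i] += 1 if x[i] >= 0 else -1
        err -= 1
    while err < 0:  # too many pulses: shrink the most over-quantized entry
        idx = np.where(np.abs(q) > 0)[0]
        i = idx[int(np.argmax(np.abs(q[idx]) - np.abs(x[idx])))]
        q[i] += -1 if q[i] > 0 else 1  # move toward zero
        err += 1
    return q

# Example: a small weight vector quantized with K = 4 pulses.
q = pvq_quantize([0.5, -0.3, 0.2], 4)
print(q)  # an integer vector with |q| summing to 4
```

Because every quantized weight is a small signed integer (a count of "pulses"), many entries round to zero for modest K, which is the sparsity the abstract highlights, and multiplications by such weights can be replaced with shifts and adds.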
