LUTNet: speeding up deep neural network inferencing via look-up tables

05/25/2019
by Chai Wah Wu, et al.

We consider the use of look-up tables (LUTs) to speed up and simplify the hardware implementation of a deep neural network for inferencing after the weights have been trained. LUTs replace the matrix multiply-and-add operations with a small number of table lookups and additions, resulting in a multiplier-less implementation. We compare the tradeoffs of this approach in terms of accuracy versus LUT size and number of operations.
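As a rough illustration of the idea (a minimal sketch under assumed details, not the scheme used in the paper): if activations are quantized to a small number of levels, each trained weight's products with all levels can be precomputed into a small table, and a matrix-vector product then reduces to table lookups followed by additions, with no multipliers. The 4-bit activation grid and the helper names build_luts, quantize, and lut_matvec below are illustrative assumptions.

import numpy as np

# Sketch of a multiplier-less matrix-vector product via per-weight look-up tables.
# Assumption: activations are quantized to 4-bit codes on a fixed uniform grid.

ACT_BITS = 4
LEVELS = np.linspace(0.0, 1.0, 2 ** ACT_BITS)  # assumed activation quantization grid


def build_luts(W):
    # One small table per weight: LUT[i, j, k] = W[i, j] * LEVELS[k], precomputed offline.
    return W[:, :, None] * LEVELS[None, None, :]


def quantize(a):
    # Map real-valued activations to their nearest 4-bit code on the assumed grid.
    return np.clip(np.round(a * (2 ** ACT_BITS - 1)), 0, 2 ** ACT_BITS - 1).astype(int)


def lut_matvec(luts, codes):
    # Multiplier-less y = W @ a: gather precomputed products, then sum (additions only).
    out_f, in_f, _ = luts.shape
    gathered = luts[np.arange(out_f)[:, None], np.arange(in_f)[None, :], codes[None, :]]
    return gathered.sum(axis=1)


# Usage: the LUT result matches explicit multiply-and-add on the same quantized input.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))
codes = quantize(rng.random(32))
assert np.allclose(lut_matvec(build_luts(W), codes), W @ LEVELS[codes])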
