A Novel Approximate Hamming Weight Computation for Spiking Neural Networks: An FPGA-Friendly Architecture
Computing the Hamming weight of long, sparse binary vectors is an important operation in many scientific applications, particularly in spiking neural networks, which are the focus of this work. To improve both the area and latency of their FPGA implementations, we propose a method, inspired by synaptic transmission failure, that exploits FPGA lookup tables to compress long input vectors. To evaluate the effectiveness of this approach, we count the number of 1's in the compressed vector using a simple linear adder. We classify the compressors into shallow ones, with up to two levels of lookup tables, and deep ones, with more than two levels. The architecture generated by this approach yields up to 82% and 35% improvements in area and latency, respectively. Moreover, our simulation results show that computing the Hamming weight of a 1024-bit vector in a spiking neural network using only deep compressors preserves the chaotic behavior of the network while only slightly impacting its learning performance.
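To illustrate the general idea, below is a minimal Python sketch of one plausible compressor model: each LUT emits a saturated (and therefore lossy) partial count of its input bits, and a simple linear adder accumulates the partial counts. The LUT width, saturation threshold, and sparsity level are illustrative assumptions for this sketch, not the design described in the paper.

```python
import random

# Illustrative parameters (assumptions, not the paper's values):
# a 6-input LUT emitting a 2-bit saturated count.
LUT_WIDTH = 6
SAT_MAX = 3  # counts above 3 are clipped, "dropping" excess 1's

def lut_compress(bits):
    """Model one approximate LUT compressor: exact popcount up to
    SAT_MAX, saturated above it (the lossy, failure-like step)."""
    return min(sum(bits), SAT_MAX)

def approx_hamming_weight(vec):
    """Partition the vector into LUT-sized groups, compress each group,
    then accumulate the partial counts with a simple linear adder."""
    return sum(lut_compress(vec[i:i + LUT_WIDTH])
               for i in range(0, len(vec), LUT_WIDTH))

# Sparse 1024-bit vector: the approximation error stays small because
# few groups ever exceed the saturation threshold.
vec = [1 if random.random() < 0.05 else 0 for _ in range(1024)]
print(sum(vec), approx_hamming_weight(vec))
```

For a sparse input, most groups hold at most a few 1's, so the saturated counts are usually exact; the occasional clipped count mimics the stochastic loss of spikes suggested by synaptic transmission failure.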