A Highly Parallel FPGA Implementation of Sparse Neural Network Training

05/31/2018
by Sourya Dey, et al.

We demonstrate an FPGA implementation of a parallel and reconfigurable architecture for sparse neural networks, capable of on-chip training and inference. The network connectivity uses pre-determined, structured sparsity to significantly lower memory and computational requirements. The architecture uses a notion of edge-processing and is highly pipelined and parallelized, decreasing training times. Moreover, the device can be reconfigured to trade off resource utilization with training time to fit networks and datasets of varying sizes. The overall effect is to reduce network complexity by more than 8x while maintaining high fidelity of inference results. This complexity reduction enables significantly greater exploration of network hyperparameters and structure. As proof of concept, we show implementation results on an Artix-7 FPGA.
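The abstract's "pre-determined, structured sparsity" can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the paper's actual connectivity scheme: it builds a fixed mask in which every output neuron connects to exactly `fan_in` inputs, chosen once before training. The function name `structured_sparse_mask` and the random-choice pattern are assumptions for illustration; with a fan-in of 8 out of 64 inputs, the layer keeps 1/8 of its dense connections, matching the abstract's "more than 8x" complexity-reduction figure.

```python
import numpy as np

def structured_sparse_mask(n_in, n_out, fan_in, seed=0):
    """Build a fixed connectivity mask where every output neuron
    connects to exactly `fan_in` of the `n_in` inputs.
    The pattern is decided before training (pre-determined sparsity),
    so memory and compute per layer shrink by a factor n_in / fan_in."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=bool)
    for j in range(n_out):
        # Each output neuron gets a fixed set of fan_in input edges.
        mask[j, rng.choice(n_in, size=fan_in, replace=False)] = True
    return mask

# Example: 64 inputs, 32 outputs, fan-in 8 -> density 8/64 = 12.5%,
# i.e. an 8x reduction versus a dense layer.
mask = structured_sparse_mask(64, 32, fan_in=8)
density = mask.sum() / mask.size
```

Because the connectivity is fixed in advance, hardware can be laid out around a known, regular edge count per neuron, which is what enables the edge-processing parallelism the abstract describes.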
