GPU Tensor Cores for fast Arithmetic Reductions

01/15/2020
by Cristóbal A. Navarro, et al.

This work proposes a GPU tensor core approach that encodes the arithmetic reduction of n numbers as a set of chained m × m matrix multiply accumulate (MMA) operations executed in parallel by GPU tensor cores. The asymptotic running time of the proposed chained tensor core approach is T(n) = 5 log_{m²} n and its speedup is S = (4/5) log₂ m² over the classic O(n log n) parallel reduction algorithm. Experimental performance results show that the proposed reduction method is ∼3.2× faster than a conventional GPU reduction implementation, and preserves numerical precision because the sub-results of each chain of R MMAs are kept as 32-bit floating-point values before all being reduced into a final 32-bit result. The chained MMA design allows a flexible configuration of thread-blocks; small thread-blocks of 32 or 128 threads can still achieve maximum performance using a chain of R = 4 or 5 MMAs per block, while large thread-blocks work best with R = 1. The results obtained in this work show that tensor cores can indeed provide a significant performance improvement to non-Machine Learning applications such as the arithmetic reduction, which is an integration tool used in the study of many scientific phenomena.
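For the m = 16 tiles exposed by current tensor cores, the speedup formula above evaluates to S = (4/5)·log₂ 256 = 6.4. To make the chained-MMA idea concrete, the following is a minimal CUDA sketch, not taken from the paper's code: it uses the standard WMMA API and encodes a partial sum as an MMA against an all-ones matrix, chaining R such MMAs per warp while keeping the accumulator in FP32, as the abstract describes. The kernel name, TILE, and R_CHAIN are illustrative choices, not identifiers from the paper.

```cuda
// Sketch: one warp per block reduces R_CHAIN consecutive 16x16 tiles of
// half-precision values by chaining MMAs of the form  acc = ONES * V_r + acc,
// where ONES is an all-ones matrix_a fragment. Row 0 of the FP32 accumulator
// then holds 16 column sums, which are folded into a single 32-bit result.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

constexpr int TILE    = 16;   // m = 16 on Volta/Turing tensor cores
constexpr int R_CHAIN = 4;    // R: number of chained MMAs per warp (assumption)

__global__ void reduce_chain_mma(const half *data, float *out)
{
    wmma::fragment<wmma::matrix_a, TILE, TILE, TILE, half, wmma::row_major> ones;
    wmma::fragment<wmma::matrix_b, TILE, TILE, TILE, half, wmma::row_major> vals;
    wmma::fragment<wmma::accumulator, TILE, TILE, TILE, float> acc;

    wmma::fill_fragment(ones, __float2half(1.0f));  // all-ones operand
    wmma::fill_fragment(acc, 0.0f);                 // 32-bit accumulator

    // Chain of R MMAs: acc = ones * V_r + acc; sub-results stay in FP32.
    const half *base = data + (size_t)blockIdx.x * R_CHAIN * TILE * TILE;
    for (int r = 0; r < R_CHAIN; ++r) {
        wmma::load_matrix_sync(vals, base + r * TILE * TILE, TILE);
        wmma::mma_sync(acc, ones, vals, acc);
    }

    // Each row of acc now holds the 16 accumulated column sums; fold row 0.
    __shared__ float partial[TILE * TILE];
    wmma::store_matrix_sync(partial, acc, TILE, wmma::mem_row_major);
    if (threadIdx.x == 0) {
        float s = 0.0f;
        for (int j = 0; j < TILE; ++j) s += partial[j];
        atomicAdd(out, s);   // combine per-warp results into the final value
    }
}
```

Launched as reduce_chain_mma<<<n / (R_CHAIN * 256), 32>>>(d_data, d_out) for n a multiple of R_CHAIN·256, each warp performs R_CHAIN MMAs instead of a log-depth shuffle reduction; varying R_CHAIN per thread-block size mirrors the R = 1 versus R = 4,5 trade-off discussed in the abstract.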
