Distributed-Memory Randomized Algorithms for Sparse Tensor CP Decomposition

10/11/2022
by Vivek Bharadwaj, et al.

Low-rank CANDECOMP/PARAFAC (CP) decomposition is a powerful tool for the analysis of sparse tensors, which can represent diverse datasets involving discrete-valued variables. Given a sparse tensor, producing a low-rank CP decomposition is computation- and memory-intensive, typically involving several large, structured linear least-squares problems. Several recent works have provided randomized sketching methods to reduce the cost of these least-squares problems, along with shared-memory prototypes of their algorithms. Unfortunately, these prototypes are slow compared to optimized non-randomized tensor decomposition software. Furthermore, they do not scale to tensors that exceed the memory capacity of a single shared-memory device. We extend randomized algorithms for CP decomposition to the distributed-memory setting and provide high-performance implementations competitive with state-of-the-art non-randomized libraries. These algorithms sample from a distribution of statistical leverage scores to reduce the cost of the repeated least-squares solves required in the tensor decomposition. We show how to efficiently sample from an approximate leverage-score distribution of the left-hand side of each linear system when the CP factor matrices are distributed by block rows among processors. In contrast to earlier works that only communicate dense factor matrices in a Cartesian topology between processors, we use sampling to avoid expensive reduce-scatter collectives by communicating selected nonzeros from the sparse tensor and a small subset of factor matrix rows. On the CPU partition of the NERSC Cray EX supercomputer Perlmutter, our high-performance implementations require just seconds to compute low-rank approximations of real-world sparse tensors with billions of nonzeros.
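To illustrate the core idea of leverage-score sampling that the abstract refers to, here is a minimal, self-contained NumPy sketch. It is not the paper's distributed algorithm: the matrix sizes, sample count, and the use of an exact QR factorization to obtain leverage scores are all illustrative assumptions (the paper instead samples from an *approximate* leverage-score distribution of structured CP least-squares systems across processors).

```python
import numpy as np

# Illustrative leverage-score sampling for an overdetermined least-squares
# problem min_x ||Ax - b||_2. All sizes and names here are assumptions for
# the sketch, not values from the paper.
rng = np.random.default_rng(0)
m, n, s = 2000, 20, 200                  # tall system; sample s of m rows
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# The leverage score of row i is the squared norm of row i of an orthonormal
# basis Q for the column space of A; the scores sum to rank(A) = n.
Q, _ = np.linalg.qr(A)
lev = np.einsum("ij,ij->i", Q, Q)        # row-wise squared norms of Q
p = lev / lev.sum()                      # sampling probabilities

# Sample s rows with replacement, rescaling each by 1/sqrt(s * p_i) so the
# sketched problem is an unbiased estimator of the full normal equations.
idx = rng.choice(m, size=s, p=p)
scale = 1.0 / np.sqrt(s * p[idx])
x_sketch, *_ = np.linalg.lstsq(A[idx] * scale[:, None], b[idx] * scale,
                               rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(rel_err)
```

The sketched solve touches only `s` rows instead of all `m`, which is the source of the speedup; the paper's contribution is doing this when `A` is the structured left-hand side of a CP least-squares problem whose factor matrices are distributed by block rows, without ever forming `A` or its orthogonal basis explicitly.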
