Orthogonal layers of parallelism in large-scale eigenvalue computations

09/05/2022

by Andreas Alvermann et al.

We address the communication overhead of distributed sparse matrix-(multiple)-vector multiplication in the context of large-scale eigensolvers, using filter diagonalization as an example. The basis of our study is a performance model that includes a communication metric computed directly from the matrix sparsity pattern, without running any code. The performance model quantifies the extent to which scalability and parallel efficiency are lost due to communication overhead. To restore scalability, we identify two orthogonal layers of parallelism in the filter diagonalization technique. In the horizontal layer, the rows of the sparse matrix are distributed across individual processes. In the vertical layer, bundles of multiple vectors are distributed across separate process groups. An analysis in terms of the communication metric predicts that scalability can be restored if, and only if, the two orthogonal layers of parallelism are implemented via different distributed vector layouts. Our theoretical analysis is corroborated by benchmarks for application matrices from quantum and solid-state physics. Finally, we demonstrate the benefits of using orthogonal layers of parallelism with two exemplary application cases, an exciton system and a strongly correlated electron system, which incur small and large communication overhead, respectively.
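The central quantity here, a communication metric computed directly from the sparsity pattern, can be illustrated independently of the paper's actual implementation. The following minimal sketch (Python with SciPy; the function name and the contiguous block-row distribution are assumptions, not the authors' code) counts, for each process, how many remote vector entries one sparse matrix-vector multiplication would require:

```python
import numpy as np
import scipy.sparse as sp

def communication_volume(A, P):
    """Remote vector entries each of P processes must receive for y = A @ x,
    assuming contiguous block rows and a matching block vector layout."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    bounds = np.linspace(0, n, P + 1, dtype=int)  # block-row boundaries
    volume = np.zeros(P, dtype=int)
    for p in range(P):
        lo, hi = bounds[p], bounds[p + 1]
        # Column indices touched by this process's rows.
        cols = np.unique(A.indices[A.indptr[lo]:A.indptr[hi]])
        # Entries of x owned by other processes must be communicated.
        volume[p] = np.count_nonzero((cols < lo) | (cols >= hi))
    return volume

# Example: a 1D Laplacian couples only nearest neighbors, so each process
# needs at most two remote entries regardless of the matrix size.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(1000, 1000))
print(communication_volume(A, 8))  # -> [1 2 2 2 2 2 2 1]
```

The two orthogonal layers of parallelism can likewise be sketched as a split of the global communicator into a 2D process grid (a minimal illustration assuming mpi4py; the variable names are hypothetical, and the paper's key point is that the two layers should additionally use different distributed vector layouts):

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
procs_per_row_dist = 4                        # ranks sharing one row distribution
row_rank = world.rank % procs_per_row_dist    # position within the row distribution
bundle_id = world.rank // procs_per_row_dist  # which vector bundle this rank holds

# Horizontal layer: ranks that jointly store the matrix rows for one bundle.
horizontal = world.Split(color=bundle_id, key=row_rank)
# Vertical layer: ranks that own the same rows but different vector bundles.
vertical = world.Split(color=row_rank, key=bundle_id)
```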
