Gundolf Haase and Michael Kuhn

We start our parallelization of iterative methods on the basis of a non-overlapping domain decomposition. The parallelization of the conjugate gradient method on the basis of this data distribution is folklore. The same holds for parallel geometrical multigrid algorithms.
The situation is different for algebraic multigrid (AMG), even if we start with locally stored stiffness matrices whose sum yields the global stiffness matrix (a so-called distributed matrix). The classical AMG coarsening process, combined with the Galerkin approach for computing the coarse stiffness matrices, would destroy this locality, i.e., the coarse matrices would no longer be distributed matrices. Such a straightforward AMG parallelization would therefore require a different parallel storage scheme for the coarse matrices and would introduce communication in interpolation/restriction and in the matrix-times-vector multiplication on the coarser grids.
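The distributed-matrix property can be illustrated with a small sketch. The following NumPy toy example (a 1D Laplacian on five nodes, split into two non-overlapping subdomains sharing one interface node; all names and the embedding via `np.ix_` are illustrative assumptions, not the paper's code) shows that each subdomain stores only its local stiffness matrix, while the global matrix is recovered as the sum of the local matrices embedded into global numbering:

```python
import numpy as np

# 1D Poisson toy problem: 5 nodes, 4 unit elements, two subdomains
# that share interface node 2 (non-overlapping domain decomposition).
n = 5
subdomain_nodes = [np.array([0, 1, 2]),   # subdomain 1
                   np.array([2, 3, 4])]   # subdomain 2

def local_stiffness(m):
    """1D Laplacian stiffness matrix for m nodes (m-1 unit elements)."""
    K = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    K[0, 0] = K[-1, -1] = 1.0
    return K

# Each process stores only its local stiffness matrix A_s ...
A_local = [local_stiffness(len(idx)) for idx in subdomain_nodes]

# ... and the global matrix (never assembled in practice) is the sum of
# the local matrices embedded into global numbering: A = sum_s R_s^T A_s R_s.
A_global = np.zeros((n, n))
for idx, A_s in zip(subdomain_nodes, A_local):
    A_global[np.ix_(idx, idx)] += A_s

print(A_global)
```

Note that only the diagonal entry of the interface node receives contributions from both subdomains; all other entries come from exactly one local matrix.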
Our goal is to construct an AMG coarsening such that the intergrid transfer requires no communication at all and all coarse matrices are distributed matrices, i.e., there is no communication in the matrix-times-vector operation either. This can be achieved by using the results of a general theory of admissible parallel matrix operations as the construction principle for the coarsening algorithm. The subsequent parallel multigrid iteration is then the same as in a geometrical multigrid procedure. The resulting DD-AMG algorithm is not restricted to non-overlapping data distributions; the principle applies immediately to overlapping data distributions as well.
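A minimal sketch of the communication pattern behind such admissible operations, continuing the 1D toy setting above (the vector naming and the sequential emulation of two processes are assumptions for illustration): with a distributed matrix and an accumulated vector, each process multiplies locally, and the only communication is the accumulation of the shared interface entry.

```python
import numpy as np

# Same toy setting: 5 nodes, two subdomains sharing interface node 2.
n = 5
subdomain_nodes = [np.array([0, 1, 2]), np.array([2, 3, 4])]

def local_stiffness(m):
    """1D Laplacian stiffness matrix for m nodes (m-1 unit elements)."""
    K = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    K[0, 0] = K[-1, -1] = 1.0
    return K

A_local = [local_stiffness(len(idx)) for idx in subdomain_nodes]

# Accumulated vector: every process holds the true values of its nodes.
u = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

# Each process multiplies its local matrix with its local part of u:
# no communication is needed for this step.
w_local = [A_s @ u[idx] for idx, A_s in zip(subdomain_nodes, A_local)]

# The result is a distributed vector; summing the local pieces onto the
# shared interface node (the only communication, e.g. one neighbour
# exchange or an MPI reduction) yields the accumulated result w = A u.
w = np.zeros(n)
for idx, w_s in zip(subdomain_nodes, w_local):
    w[idx] += w_s

A_global = local_stiffness(n)          # assembled matrix, for checking only
assert np.allclose(w, A_global @ u)
```

Keeping all coarse matrices distributed preserves exactly this pattern on every level, which is why the coarse-grid matrix-times-vector operation needs no extra communication.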
We applied this strategy successfully in the parallelization of the originally sequential AMG code PEBBLES (Patch and Element Based gray Box Linear Equation Solver). In particular, only very small changes to the existing code were necessary.
The numerical examples, solving the 3D Maxwell equations with this parallel AMG, show the good behavior of the code and prove the efficiency of our parallelization strategy from a manpower/performance viewpoint.