Phase-field models are an important class of mathematical techniques for describing a multitude of industry-relevant physical and technical processes. Examples include the modelling of crack and fracture propagation in solid media like ceramics or dry soil (see figure below), the representation of liquid phase epitaxy for solar cells, semiconductors or LEDs, as well as melting and solidification processes of alloys.
The price for the broad applicability and mathematical elegance of this approach is the significant computing cost required to simulate phase-field equations at large scales. Solutions of these equations typically contain sharp interfaces moving through the domain, structures that can only be resolved with carefully tuned, adaptive discretization schemes in space and time. Even worse, many key phenomena emerge only when the simulation domain is large and the simulation time is long enough. For example, in order to simulate the microcracks leading to fatigue failure of a piece of machinery, the domain must contain a certain number of these cracks. For epitaxy, in turn, structures are typically described on nanometer scales, while specimen sizes are on the order of centimeters. Thus, the enormous number of degrees of freedom in the spatial and temporal discretization, as well as the significant complexity of the simulation, demand the use of modern HPC architectures.
The goal of the BMBF project ParaPhase - space-time parallel adaptive simulation of phase-field models on HPC architectures (FKZ 01IH15005A, BMBF program “IKT 2020 - Forschung für Innovation”) - is the development of algorithms and methods that allow for highly efficient space-time parallel and adaptive simulations of phase-field problems. Three key aspects will be addressed in the course of the project:
Heterogeneous parallelization in space. The adaptive phase-field multigrid solver TNNMG (truncated nonsmooth Newton multigrid) will be parallelized using load-balanced decomposition techniques and GPU-based acceleration of the smoother.
Innovative parallelization in time. For optimal parallel performance even on extreme-scale platforms, novel approaches like Parareal and the “parallel full approximation scheme in space and time” (PFASST) will be used for parallelization in the temporal direction, exploiting the hierarchical structure of the spatial discretization and solver.
High-order methods in space and time. To increase the arithmetic intensity, i.e., the ratio between computation and memory access, flexible high-order methods in space (using the discontinuous Galerkin approach) and time (using spectral deferred corrections) will be implemented and combined.
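The time-parallel idea behind Parareal can be illustrated on a scalar model problem: a cheap coarse propagator sweeps serially over the time slices, while the expensive fine propagator runs independently (and hence in parallel) on each slice, with a correction coupling the two. The following is a minimal sketch only; the explicit Euler propagators, the slice count and the test equation u'(t) = λu(t) are illustrative choices and not the project's actual solvers.

```python
import numpy as np

# Model problem: u'(t) = lam * u(t), u(0) = 1, exact solution exp(lam * t).
lam = -1.0          # decay rate (illustrative)
T = 1.0             # final time
N = 10              # number of time slices (each would run on its own process)
dt = T / N

def coarse(u, dt):
    """Cheap propagator: a single explicit Euler step over the slice."""
    return u * (1.0 + lam * dt)

def fine(u, dt, substeps=100):
    """Expensive propagator: many small Euler steps over the slice."""
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1.0 + lam * h)
    return u

# Initial guess from a serial coarse sweep over all slices
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# Parareal iterations: the fine solves on the slices are independent of each
# other (parallelizable); only the coarse correction sweep remains serial.
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(N)])      # parallel part
    G_old = np.array([coarse(U[n], dt) for n in range(N)])
    U_new = np.zeros(N + 1)
    U_new[0] = 1.0
    for n in range(N):
        # Parareal update: U_{n+1} = G(U_n_new) + F(U_n_old) - G(U_n_old)
        U_new[n + 1] = coarse(U_new[n], dt) + F[n] - G_old[n]
    U = U_new

print(abs(U[-1] - np.exp(lam * T)))  # error w.r.t. the exact solution
```

After a few iterations the result matches the serial fine solution, so the achievable speedup depends on how cheap the coarse propagator is and how many iterations are needed; PFASST pushes this further by interleaving the iterations with a space-time multigrid hierarchy.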
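Why high order helps can be made concrete with generic back-of-the-envelope flop and byte counts; the two operations below are standard textbook examples, not measurements from the project code. A memory-bound vector update performs very little computation per byte moved, while a dense matrix product reuses each loaded value many times:

```python
# Arithmetic intensity = floating-point operations per byte of memory traffic.
# Counts assume 8-byte doubles and no cache reuse beyond the obvious.

def intensity(flops, bytes_moved):
    return flops / bytes_moved

n = 1024

# axpy, y = a*x + y: 2n flops; reads x and y, writes y -> 3n doubles moved
axpy = intensity(2 * n, 3 * n * 8)

# dense matrix-matrix product C = A*B: 2n^3 flops over ~3n^2 doubles moved
gemm = intensity(2 * n**3, 3 * n**2 * 8)

print(f"axpy: {axpy:.3f} flop/byte, gemm: {gemm:.1f} flop/byte")
```

Low-order stencil updates behave like the first case; high-order discontinuous Galerkin and spectral deferred correction kernels apply dense local operators and thus move toward the second, which is the regime where modern CPUs and GPUs reach a useful fraction of their peak performance.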
The interdisciplinary consortium, led by the Jülich Supercomputing Centre, consists of two partners with long-standing experience in the targeted application fields (the groups of Uwe Glatzel at Universität Bayreuth and Marc-Andre Keip at Universität Stuttgart) as well as four partners with a strong background in methods, algorithms and HPC (the groups of Carsten Gräser at FU Berlin, Oliver Sander at TU Dresden, Robert Speck at JSC and Jiri Kraus from the NVIDIA Application Lab). While the algorithms developed in this project will primarily be used to study fracture propagation and liquid phase epitaxy, these problem classes already represent a wide range of challenges in industrial applications. Based on the open-source software DUNE, the “Distributed and Unified Numerics Environment”, the resulting algorithms will help make large-scale HPC simulations accessible to researchers in these fields.
With two postdocs and three PhD students fully funded by the BMBF, this work will also contribute to the educational effort in the field of computational science and engineering, enabling young scientists to address challenges of real-world applications with HPC-ready methods and algorithms.