


The ParaPhase Project
===








Motivation
---









Phase-field models are an important class of mathematical techniques for describing a multitude of industry-relevant physical and technical processes. Examples include the modelling of cracks and fracture propagation in solid media such as ceramics or dry soil (see figure below), the representation of liquid phase epitaxy for solar cells, semiconductors or LEDs, as well as melting and solidification processes of alloys.






![bmbf2](/uploads/9923513916fe943ffad987f7852a5bb2/bmbf2.png)







The price for the broad applicability and mathematical elegance of this approach is the significant computing cost required for the simulation of phase-field equations at large scales. Solutions of these equations typically contain sharp interfaces moving through the domain. Such structures can only be resolved with carefully tuned, adaptive discretization schemes in space and time. Even worse, many key phenomena only emerge when the simulation domain is large and the simulation time is long enough. For example, in order to simulate the micro-cracks that lead to fatigue failure of a piece of machinery, the domain must contain a certain number of these cracks. For epitaxy, in turn, structures are typically described on the nanoscale, while specimen sizes are on the order of centimeters. Thus, the enormous number of degrees of freedom in the spatial and temporal discretization, as well as the significant complexity of the simulation, demand the use of modern HPC architectures.
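To make the "sharp interfaces" point concrete, here is a minimal sketch (illustrative only, not project code; the equation, parameters, and discretization are chosen ad hoc) that integrates a 1-D Allen-Cahn equation, one of the simplest phase-field models, with explicit Euler. The interface width scales with the small parameter `eps`, and the time step restriction `dt ~ dx**2` already hints at why adaptive, parallel solvers are needed at scale:

```python
import numpy as np

# Illustrative 1-D Allen-Cahn equation, a prototypical phase-field model:
#   u_t = eps^2 * u_xx + u - u^3
# All parameters are ad hoc; this is not the project's code.
eps = 0.05                    # interface-width parameter
nx = 400
x = np.linspace(-1.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / eps**2     # explicit-Euler stability restriction
u = np.tanh(x / eps)          # near-equilibrium interface at x = 0

for _ in range(2000):
    # periodic boundaries via np.roll (this creates a second interface
    # at the domain boundary, which is fine for the illustration)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (eps**2 * lap + u - u**3)

# the phases stay near u = -1 and u = +1, separated by thin interfaces
print(u.min(), u.max())
```

The bulk of the domain sits flat at one of the two phases; only the thin transition layers carry structure, which is exactly what adaptive discretizations exploit.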








Goals
---









The goal of the BMBF project **ParaPhase – space-time parallel adaptive simulation of phase-field models on HPC architectures** (FKZ 01IH15005A, BMBF program “[IKT 2020 – Forschung für Innovation](https://www.bmbf.de/de/ikt-2020-forschung-fuer-innovation-854.html)”) is the development of algorithms and methods that allow for highly efficient space-time parallel and adaptive simulations of phase-field problems. Three key aspects will be addressed in the course of the project:






1. **Heterogeneous parallelization in space.** The adaptive phase-field multigrid algorithm TNNMG will be parallelized using load-balanced decomposition techniques and GPU-based acceleration of the smoother.


2. **Innovative parallelization in time.** For optimal parallel performance even on extreme-scale platforms, novel approaches such as Parareal and the “parallel full approximation scheme in space and time” (PFASST) will be used for parallelization in the temporal direction, exploiting the hierarchical structure of the spatial discretization and solver.


3. **High-order methods in space and time.** To increase the arithmetic intensity, i.e., the ratio between computation and memory access, flexible high-order methods in space (using the discontinuous Galerkin approach) and time (using spectral deferred corrections) will be implemented and combined.
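The time-parallel idea behind Parareal can be sketched on a toy problem (illustration only, not the project's implementation; the ODE and propagators are hypothetical): a cheap coarse propagator `G` sweeps sequentially through the time slices, while the expensive fine propagator `F` can be evaluated on all slices in parallel; the correction iteration converges to the fine serial solution.

```python
import numpy as np

# Toy Parareal for the scalar ODE u' = lam * u (illustration only).
lam = -1.0
T, N = 2.0, 10                 # time horizon, number of time slices
dt = T / N

def G(u):                      # coarse propagator: one explicit Euler step
    return u + dt * lam * u

def F(u, m=100):               # fine propagator: m explicit Euler substeps
    for _ in range(m):
        u = u + (dt / m) * lam * u
    return u

u0 = 1.0
u = np.empty(N + 1)
u[0] = u0
for n in range(N):             # initial guess: sequential coarse sweep
    u[n + 1] = G(u[n])

for k in range(N):             # after N iterations Parareal is exact
    f_old = [F(u[n]) for n in range(N)]   # independent -> parallel in time
    g_old = [G(u[n]) for n in range(N)]
    for n in range(N):                    # cheap sequential correction
        u[n + 1] = G(u[n]) + f_old[n] - g_old[n]

print(u[-1])                   # matches the fine serial solution
```

In practice far fewer iterations than time slices are needed, which is where the parallel speedup comes from; PFASST pushes this further by interweaving the iteration with the hierarchy of spatial levels.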








Consortium
---







The interdisciplinary consortium, led by the Jülich Supercomputing Centre (JSC), consists of two partners with long-standing experience in the relevant fields of application (the groups of [Heike Emmerich](http://www.mps.uni-bayreuth.de/de/team/Emmerich_Heike/) at Universität Bayreuth and [Marc-André Keip](http://www.mechbau.uni-stuttgart.de/ls1/members/profs/keip/) at Universität Stuttgart) as well as four partners with a strong background in methods, algorithms and HPC (the groups of [Carsten Gräser](http://page.mi.fu-berlin.de/graeser/) at FU Berlin, [Oliver Sander](http://www.math.tu-dresden.de/~osander/) at TU Dresden, [Robert Speck](http://www.fz-juelich.de/ias/jsc/speck_r) at JSC, and Jiri Kraus from the [NVIDIA Application Lab](http://www.fz-juelich.de/ias/jsc/EN/Research/HPCTechnology/ExaScaleLabs/NVLAB/_node.html)). While the algorithms developed in this project will primarily be used for studying fracture propagation and liquid phase epitaxy, these problem classes already represent a wide range of challenges in industrial applications. Based on the open-source software [DUNE](https://dune-project.org/), the “Distributed and Unified Numerics Environment”, the resulting algorithms will help to make large-scale HPC simulations accessible to researchers in these fields.









Two postdocs and three PhD students working on these topics are fully funded by the BMBF within this project.





![paraphaselogosmall](/uploads/4b38f8bc8371066b1883dc8821d2b7ad/paraphaselogosmall.png) ![BMBF_CMYK_Gef_L_e](/uploads/f185e246b910f7f75e11ae4339adb214/BMBF_CMYK_Gef_L_e.png)


