Local Rollback for Resilient MPI Applications with Application-Level Checkpointing and Message Logging
Use this link to cite
http://hdl.handle.net/2183/27584
Unless otherwise indicated, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International
Collections
- GI-GAC - Articles [180]
Metadata
Title
Local Rollback for Resilient MPI Applications with Application-Level Checkpointing and Message Logging
Date
2019-02
Bibliographic citation
Nuria Losada, George Bosilca, Aurélien Bouteiller, Patricia González, María J. Martín, Local rollback for resilient MPI applications with application-level checkpointing and message logging, Future Generation Computer Systems, Volume 91, 2019, Pages 450-464, ISSN 0167-739X, https://doi.org/10.1016/j.future.2018.09.041.
Abstract
The resilience approach generally used in high-performance computing (HPC) relies on coordinated checkpoint/restart, a global rollback of all the processes that are running the application. However, in many instances, the failure has a more localized scope and its impact is usually restricted to a subset of the resources being used. Thus, a global rollback would result in unnecessary overhead and energy consumption, since all processes, including those unaffected by the failure, discard their state and roll back to the last checkpoint to repeat computations that were already done. The User Level Failure Mitigation (ULFM) interface – the latest proposal for the inclusion of resilience features in the Message Passing Interface (MPI) standard – enables the deployment of more flexible recovery strategies, including localized recovery. This work proposes a local rollback approach that can be generally applied to Single Program, Multiple Data (SPMD) applications by combining ULFM, the ComPiler for Portable Checkpointing (CPPC) tool, and the Open MPI VProtocol system-level message logging component. Only failed processes are recovered from the last checkpoint, while consistency before further progress in the execution is achieved through a two-level message logging process. To further optimize this approach, point-to-point communications are logged by the Open MPI VProtocol component, while collective communications are optimally logged at the application level, thereby decoupling the logging protocol from the particular collective implementation. This spatially coordinated protocol applied by CPPC reduces the log size and memory requirements and, overall, the resilience impact on the applications.
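The core idea of local rollback with message logging can be illustrated with a small simulation. The sketch below is purely hypothetical and does not use the CPPC, ULFM, or Open MPI VProtocol APIs: it models sender-based logging, where each process keeps a log of its outgoing messages so that, after a failure, only the failed process restores its application-level checkpoint and survivors replay the logged messages it needs, instead of every process rolling back.

```python
# Hypothetical sketch of sender-based message logging with local rollback.
# All class and method names are illustrative, not from the paper's tools.

class Process:
    def __init__(self, rank):
        self.rank = rank
        self.state = 0          # application state (a running sum here)
        self.checkpoint = 0     # last application-level checkpoint
        self.send_log = []      # sender-side log of outgoing messages

    def send(self, dest, payload):
        # Log the message on the sender before delivering it.
        self.send_log.append((dest.rank, payload))
        dest.receive(payload)

    def receive(self, payload):
        self.state += payload

    def take_checkpoint(self):
        self.checkpoint = self.state

    def fail_and_recover(self, peers):
        # Local rollback: only this process returns to its checkpoint...
        self.state = self.checkpoint
        # ...while the surviving peers replay logged messages addressed
        # to it, restoring consistency without a global rollback.
        for peer in peers:
            for dest_rank, payload in peer.send_log:
                if dest_rank == self.rank:
                    self.receive(payload)

p0, p1 = Process(0), Process(1)
p1.take_checkpoint()
p0.send(p1, 5)
p0.send(p1, 7)
p1.fail_and_recover([p0])   # only p1 rolls back; p0 replays its log
print(p1.state)             # prints 12, the pre-failure state
```

Note that p0 never discards its state: replay from the sender-side log is what makes the rollback local, which is the property the abstract contrasts with coordinated checkpoint/restart.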
Keywords
MPI
Resilience
Message logging
Application-level checkpointing
Local rollback
Publisher's version
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
ISSN
0167-739X
1872-7115