- poster, September 2015
Performance Evaluation of OpenFOAM* with MPI-3 RMA Routines on Intel® Xeon® Processors and Intel® Xeon Phi™ Coprocessors
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 18, Pages 1–2, https://doi.org/10.1145/2802658.2802676
OpenFOAM is a software package for solving partial differential equations and is very popular for computational fluid dynamics in the automotive segment. In this work, we describe our evaluation of the performance of OpenFOAM with MPI-3 Remote Memory ...
- poster, September 2015
Correctness Analysis of MPI-3 Non-Blocking Communications in PARCOACH
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 16, Pages 1–2, https://doi.org/10.1145/2802658.2802674
MPI-3 provides functions for non-blocking collectives. To help programmers introduce non-blocking collectives into existing MPI programs, we improve the PARCOACH tool for checking the correctness of MPI call sequences. These enhancements focus on correct call ...
- research-article, September 2015
GPU-Aware Design, Implementation, and Evaluation of Non-blocking Collective Benchmarks
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 9, Pages 1–10, https://doi.org/10.1145/2802658.2802672
As we move towards efficient exascale systems, heterogeneous accelerators like NVIDIA GPUs are becoming a significant compute component of modern HPC clusters. It has become important to utilize every single cycle of every compute device available in ...
- research-article, September 2015
Plan B: Interruption of Ongoing MPI Operations to Support Failure Recovery
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 11, Pages 1–9, https://doi.org/10.1145/2802658.2802668
Advanced failure recovery strategies in HPC systems benefit tremendously from in-place failure recovery, in which the MPI infrastructure can survive process crashes and resume communication services. In this paper we present the rationale behind the ...
- research-article, September 2015
MPI Advisor: a Minimal Overhead Tool for MPI Library Performance Tuning
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 6, Pages 1–10, https://doi.org/10.1145/2802658.2802667
A majority of parallel applications executed on HPC clusters use MPI for communication between processes. Most users treat MPI as a black box, executing their programs using the cluster's default settings. While the default settings perform adequately ...
- research-article, September 2015
A Memory Management System Optimized for BDMPI's Memory and Execution Model
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 2, Pages 1–10, https://doi.org/10.1145/2802658.2802666
There is a growing need to perform large computations on small systems, as access to large systems is not widely available and cannot keep up with the scaling of data. BDMPI was recently introduced as a way of achieving this for applications written in ...
- research-article, September 2015
Detecting Silent Data Corruption for Extreme-Scale MPI Applications
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 12, Pages 1–10, https://doi.org/10.1145/2802658.2802665
Next-generation supercomputers are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving ...
- research-article, September 2015
MPI-focused Tracing with OTFX: An MPI-aware In-memory Event Tracing Extension to the Open Trace Format 2
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 7, Pages 1–8, https://doi.org/10.1145/2802658.2802664
Performance analysis tools are more indispensable than ever for developing applications that utilize the enormous computing resources of high performance computing (HPC) systems. In event-based performance analysis, the amount of collected data is one of the ...
- research-article, September 2015
Isomorphic, Sparse MPI-like Collective Communication Operations for Parallel Stencil Computations
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 10, Pages 1–10, https://doi.org/10.1145/2802658.2802663
We propose a specification and discuss implementations of collective operations for parallel stencil-like computations that are not supported well by the current MPI 3.1 neighborhood collectives. In our isomorphic, sparse collectives all processes ...
- research-article, September 2015
On the Impact of Synchronizing Clocks and Processes on Benchmarking MPI Collectives
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 8, Pages 1–10, https://doi.org/10.1145/2802658.2802662
We consider the problem of accurately measuring the time to complete an MPI collective operation, as the result strongly depends on how the time is measured. Our goal is to develop an experimental method that allows for reproducible measurements of MPI ...
- research-article, September 2015
Scalable and Fault Tolerant Failure Detection and Consensus
EuroMPI '15: Proceedings of the 22nd European MPI Users' Group Meeting, Article No.: 13, Pages 1–9, https://doi.org/10.1145/2802658.2802660
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes ...