MPI thread-level checking for MPI+OpenMP applications

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

MPI is the most widely used parallel programming model, but the shrinking amount of memory per compute core increasingly pushes MPI to be combined with shared-memory approaches such as OpenMP. In such hybrid codes, the interoperability of the two models is challenging. The MPI 2.0 standard defines so-called thread levels to indicate how MPI will interact with threads. Yet even though hybrid programs are becoming more common, debugging tools are still lacking, in particular for checking thread-level compliance. To fill this gap, we propose a static analysis that verifies the thread level required by an application. This work extends PARCOACH, a GCC plugin focused on the detection of MPI collective errors in MPI and MPI+OpenMP programs. We validated our analysis on computational benchmarks and applications and measured a low overhead.
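
To illustrate the thread-level contract that such an analysis checks, the sketch below (not taken from the paper; the program itself is a hypothetical example, though the MPI and OpenMP constructs are standard) requests MPI_THREAD_FUNNELED and confines all MPI calls inside the OpenMP region to the master thread. Calling MPI from any other thread in this program would require MPI_THREAD_MULTIPLE and is the kind of mismatch a thread-level compliance check aims to flag.

```c
/* Hypothetical MPI+OpenMP example (not from the paper) showing a program
 * whose required thread level is MPI_THREAD_FUNNELED: only the thread that
 * initialized MPI makes MPI calls inside the OpenMP parallel region. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library provides an insufficient thread level\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank, global = 0;
    #pragma omp parallel
    {
        /* ... thread-parallel computation ... */

        #pragma omp master   /* only the master thread touches MPI (FUNNELED) */
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    }   /* implicit barrier: global is visible to all threads afterwards */

    if (rank == 0)
        printf("sum of ranks = %d\n", global);

    MPI_Finalize();
    return 0;
}
```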

Original language: English
Title of host publication: Euro-Par 2015
Subtitle of host publication: Parallel Processing - 21st International Conference on Parallel and Distributed Computing, Proceedings
Editors: Jesper Larsson Traff, Sascha Hunold, Francesco Versaci
Publisher: Springer Verlag
Pages: 31-42
Number of pages: 12
ISBN (Print): 9783662480953
DOIs
Publication status: Published - 1 Jan 2015
Externally published: Yes
Event: 21st International Conference on Parallel and Distributed Computing, Euro-Par 2015 - Vienna, Austria
Duration: 24 Aug 2015 - 28 Aug 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9233
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 21st International Conference on Parallel and Distributed Computing, Euro-Par 2015
Country/Territory: Austria
City: Vienna
Period: 24/08/15 - 28/08/15

Keywords

  • MPI
  • MPI thread level
  • OpenMP
  • Static verification
