
Optimizing collective operations in hybrid applications

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The advent of multicore and manycore processors in clusters advocates for combining MPI with a shared-memory model such as OpenMP in high-performance parallel applications. But exploiting hardware resources with such models can be suboptimal. Thus, one approach is to use the hybrid context to perform MPI communications. In this paper, we address this issue with the concept of hybrid collective communications, which consists of using OpenMP threads to parallelize MPI collectives. We validate our approach on several MPI libraries (Intel MPI and MPC), improving the overall time by up to a factor of 5.29× in a real-world application.

Original language: English
Title of host publication: Proceedings of the 21st European MPI Users' Group Meeting, EuroMPI/ASIA 2014
Publisher: Association for Computing Machinery
Pages: 121-122
Number of pages: 2
ISBN (Electronic): 9781450328753
DOIs
Publication status: Published - 9 Sept 2014
Externally published: Yes
Event: 21st European MPI Users' Group Meeting, EuroMPI/ASIA 2014 - Kyoto, Japan
Duration: 9 Sept 2014 - 12 Sept 2014

Publication series

Name: ACM International Conference Proceeding Series
Volume: 09-12-September-2014

Conference

Conference: 21st European MPI Users' Group Meeting, EuroMPI/ASIA 2014
Country/Territory: Japan
City: Kyoto
Period: 9/09/14 - 12/09/14

Keywords

  • Collective Communications
  • MPI
  • OpenMP
