Communication-Aware Task Scheduling Strategy in Hybrid MPI+OpenMP Applications

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

While task-based programming models such as OpenMP are a promising solution to exploit large HPC compute nodes, they have to be combined with data communications such as MPI. However, performance, and even thread progression, may depend on the underlying runtime implementations. In this paper, we focus on enhancing application performance when an OpenMP task blocks inside MPI communications. This technique requires no additional effort from application developers. It relies on an online task re-ordering strategy that aims to run first the tasks that send data to other processes. We evaluate our approach on a Cholesky factorization and show a gain of around 19% in execution time on a machine with Intel Skylake compute nodes, each node having two 24-core processors.

Original language: English
Title of host publication: OpenMP
Subtitle of host publication: Enabling Massive Node-Level Parallelism - 17th International Workshop on OpenMP, IWOMP 2021, Proceedings
Editors: Simon McIntosh-Smith, Bronis R. de Supinski, Jannis Klinkenberg
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 197-210
Number of pages: 14
ISBN (Print): 9783030852610
DOIs
Publication status: Published - 1 Jan 2021
Externally published: Yes
Event: 17th International Workshop on OpenMP, IWOMP 2021 - Bristol, United Kingdom
Duration: 14 Sept 2021 - 16 Sept 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12870 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Workshop on OpenMP, IWOMP 2021
Country/Territory: United Kingdom
City: Bristol
Period: 14/09/21 - 16/09/21

Keywords

  • Asynchronism
  • MPI+OpenMP
  • Scheduling
  • Task
