Hierarchical local storage: Exploiting flexible user-data sharing between MPI tasks

Marc Tchiboukdjian, Patrick Carribault, Marc Pérache

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the advent of the multicore era, the number of cores per computational node is increasing faster than the amount of memory. This diminishing memory-to-core ratio sometimes even prevents pure MPI applications from benefiting from all cores available on each node. A possible solution is to add a shared-memory programming model like OpenMP inside the application to share variables between OpenMP threads that would otherwise be duplicated for each MPI task. Going hybrid can thus improve overall memory consumption, but may be a tedious task on large applications. To allow this data sharing without the overhead of mixing multiple programming models, we propose an MPI extension called Hierarchical Local Storage (HLS) that allows application developers to share common variables between MPI tasks on the same node. HLS is designed as a set of directives that preserve the original parallel semantics of the code and are compatible with the C, C++ and Fortran languages and the OpenMP programming model. This new mechanism is implemented inside a state-of-the-art MPI 1.3-compliant runtime called MPC. Experiments show that the HLS mechanism can effectively reduce the memory consumption of HPC applications. Moreover, by reducing data duplication in the shared cache of modern multicores, the HLS mechanism can also improve the performance of memory-intensive applications.

Original language: English
Title of host publication: Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium, IPDPS 2012
Pages: 366-377
Number of pages: 12
Publication status: Published - 4 Oct 2012
Externally published: Yes
Event: 2012 IEEE 26th International Parallel and Distributed Processing Symposium, IPDPS 2012 - Shanghai, China
Duration: 21 May 2012 - 25 May 2012

Publication series

Name: Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium, IPDPS 2012

Conference

Conference: 2012 IEEE 26th International Parallel and Distributed Processing Symposium, IPDPS 2012
Country/Territory: China
City: Shanghai
Period: 21/05/12 - 25/05/12

Keywords

  • High-Performance Computing
  • Memory Consumption
  • Parallel Programming Model
