Why Globally Re-shuffle? Revisiting Data Shuffling in Large Scale Deep Learning

  • Truong Thao Nguyen
  • Francois Trahay
  • Jens Domke
  • Aleksandr Drozd
  • Emil Vatai
  • Jianwei Liao
  • Mohamed Wahib
  • Balazs Gerofi

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed

Abstract

Stochastic gradient descent (SGD) is the most prevalent algorithm for training Deep Neural Networks (DNN). SGD iterates over the input data set in each training epoch, processing data samples in a random-access fashion. Because this puts enormous pressure on the I/O subsystem, the most common approach to distributed SGD in HPC environments is to replicate the entire dataset to node-local SSDs. However, due to rapidly growing data set sizes, this approach has become increasingly infeasible. Surprisingly, the questions of why and to what extent random access is required have received little empirical attention in the literature. In this paper, we revisit data shuffling in DL workloads to investigate the viability of partitioning the dataset among workers and performing only a partial distributed exchange of samples in each training epoch. Through extensive experiments on up to 2,048 GPUs of ABCI and 4,096 compute nodes of Fugaku, we demonstrate that in practice the validation accuracy of global shuffling can be maintained when the partial distributed exchange is carefully tuned. We provide a solution implemented in PyTorch that enables users to control the proposed data exchange scheme.
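
The abstract does not spell out the implementation's API. As a rough illustration of the idea of keeping a per-worker partition and exchanging only a fraction of samples each epoch, the following minimal PyTorch sketch defines a custom sampler; the class name `PartialExchangeSampler` and the `exchange_fraction` knob are assumptions for illustration, not the paper's actual interface, and the "exchange" here is emulated at the index level rather than by moving data between nodes.

```python
import torch
from torch.utils.data import Sampler


class PartialExchangeSampler(Sampler):
    """Illustrative sampler (not the paper's API): each worker owns a fixed
    shard of the dataset and, every epoch, replaces a fraction of its indices
    with indices drawn from the global pool, emulating a partial exchange."""

    def __init__(self, dataset_len, rank, world_size, exchange_fraction=0.1, seed=0):
        self.dataset_len = dataset_len
        self.exchange_fraction = exchange_fraction
        self.seed = seed
        self.epoch = 0
        # Initial static partition: contiguous shards, one per worker.
        per_worker = dataset_len // world_size
        start = rank * per_worker
        self.local_indices = list(range(start, start + per_worker))

    def set_epoch(self, epoch):
        # Mirror DistributedSampler: call once per epoch for deterministic shuffling.
        self.epoch = epoch

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.seed + self.epoch)
        n_exchange = int(len(self.local_indices) * self.exchange_fraction)
        # Drop a random subset of local indices and replace them with indices
        # drawn from the whole dataset (stand-in for samples received from peers).
        keep = torch.randperm(len(self.local_indices), generator=g)[n_exchange:]
        kept = [self.local_indices[i] for i in keep.tolist()]
        incoming = torch.randint(self.dataset_len, (n_exchange,), generator=g).tolist()
        self.local_indices = kept + incoming
        # Local (intra-worker) shuffle every epoch.
        order = torch.randperm(len(self.local_indices), generator=g)
        return iter([self.local_indices[i] for i in order.tolist()])

    def __len__(self):
        return len(self.local_indices)
```

Under these assumptions, the sampler would be passed to a `DataLoader` in place of `DistributedSampler`, with `set_epoch()` called at the start of each epoch; setting `exchange_fraction=1.0` approximates global shuffling, while `0.0` corresponds to purely local shuffling of a static partition.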

Original language: English
Title of host publication: Proceedings - 2022 IEEE 36th International Parallel and Distributed Processing Symposium, IPDPS 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1085-1096
Number of pages: 12
ISBN (Electronic): 9781665481069
DOIs
Publication status: Published - 1 Jan 2022
Event: 36th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2022 - Virtual, Online, France
Duration: 30 May 2022 - 3 Jun 2022

Publication series

Name: Proceedings - 2022 IEEE 36th International Parallel and Distributed Processing Symposium, IPDPS 2022

Conference

Conference: 36th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2022
Country/Territory: France
City: Virtual, Online
Period: 30/05/22 - 3/06/22

Keywords

  • Data Shuffling
  • Distributed Deep Learning
  • I/O
