Scaffold with Stochastic Gradients: New Analysis with Linear Speed-Up

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

This paper proposes a novel analysis for the Scaffold algorithm, a popular method for dealing with data heterogeneity in federated learning. While its convergence in deterministic settings—where local control variates mitigate client drift—is well established, the impact of stochastic gradient updates on its performance is less understood. To address this problem, we first show that its global parameters and control variates define a Markov chain that converges to a stationary distribution in the Wasserstein distance. Leveraging this result, we prove that Scaffold achieves linear speed-up in the number of clients up to higher-order terms in the step size. Nevertheless, our analysis reveals that Scaffold retains a higher-order bias, similar to FedAvg, that does not decrease as the number of clients increases. This highlights opportunities for developing improved stochastic federated learning algorithms.
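To make the setting concrete, the following is a minimal, self-contained sketch of the Scaffold update rule with stochastic gradients (per Karimireddy et al., 2020, "option II" control-variate update), run on a toy problem where each client holds a quadratic loss with a different optimum. The toy losses, step sizes, and all variable names are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

# Illustrative sketch of Scaffold with stochastic gradients (not the paper's code).
# Client i holds f_i(x) = 0.5 * ||x - b_i||^2, so the optima b_i are heterogeneous
# and the global optimum is mean(b). Gradients are corrupted by Gaussian noise.

rng = np.random.default_rng(0)
num_clients, dim = 8, 5
K, eta_l, eta_g = 10, 0.1, 1.0      # local steps, local / global step sizes
noise_std = 0.1                      # stochastic-gradient noise level

b = rng.normal(size=(num_clients, dim))   # client optima
x = np.zeros(dim)                         # global model parameters
c = np.zeros(dim)                         # server control variate
c_i = np.zeros((num_clients, dim))        # client control variates

def stoch_grad(i, y):
    """Unbiased stochastic gradient of f_i at y."""
    return (y - b[i]) + noise_std * rng.normal(size=dim)

for _ in range(200):                      # communication rounds
    delta_y = np.zeros((num_clients, dim))
    delta_c = np.zeros((num_clients, dim))
    for i in range(num_clients):
        y = x.copy()
        for _ in range(K):
            # drift-corrected local SGD step: g_i(y) - c_i + c
            y -= eta_l * (stoch_grad(i, y) - c_i[i] + c)
        # "option II" control-variate refresh: average applied gradient
        c_new = c_i[i] - c + (x - y) / (K * eta_l)
        delta_y[i] = y - x
        delta_c[i] = c_new - c_i[i]
        c_i[i] = c_new
    x += eta_g * delta_y.mean(axis=0)     # server model update
    c += delta_c.mean(axis=0)             # server control-variate update

# Distance of the final global model to the true optimum mean(b); the
# (global parameters, control variates) pair is the Markov chain the
# abstract's analysis studies.
print(np.linalg.norm(x - b.mean(axis=0)))
```

On this toy quadratic, the iterates settle near the global optimum up to step-size-dependent fluctuations, consistent with the stationary-distribution view described in the abstract.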

Original language: English
Pages (from-to): 42902-42946
Number of pages: 45
Journal: Proceedings of Machine Learning Research
Volume: 267
Publication status: Published - 1 Jan 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025

