Optimal algorithms for smooth and strongly convex distributed optimization in networks

  • Kevin Scaman
  • Francis Bach
  • Sébastien Bubeck
  • Yin Tat Lee
  • Laurent Massoulié

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Theory group, Microsoft Research, Redmond, United States.

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision ε > 0 in time O(√κ_g (1 + Δτ) ln(1/ε)), where κ_g is the condition number of the (global) function to optimize, Δ is the diameter of the network, and τ (resp. 1) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision ε > 0 in time O(√κ_l (1 + τ/√γ) ln(1/ε)), where κ_l is the condition number of the local functions and γ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
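As a minimal illustration of the centralized (master/slave) setting the abstract describes, the sketch below has each worker hold a local least-squares objective while the master sums the workers' gradients and runs Nesterov's accelerated gradient descent on the global function. The toy data, problem sizes, and step-size choice are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 5, 3

# Worker i privately holds f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumed toy data).
A = [rng.standard_normal((10, dim)) for _ in range(n_workers)]
b = [rng.standard_normal(10) for _ in range(n_workers)]

def global_gradient(x):
    # One master/slave round: each worker sends its local gradient to the master.
    return sum(Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b))

# Smoothness L and strong convexity mu of the global function, from its Hessian.
H = sum(Ai.T @ Ai for Ai in A)
eigs = np.linalg.eigvalsh(H)
mu, L = eigs[0], eigs[-1]
momentum = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)

x = y = np.zeros(dim)
for _ in range(300):
    x_next = y - global_gradient(y) / L    # gradient step at the extrapolated point
    y = x_next + momentum * (x_next - x)   # Nesterov extrapolation
    x = x_next

# Compare with the direct solution of the pooled least-squares problem.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.linalg.norm(x - x_star))
```

With strongly convex objectives this iteration contracts at the accelerated rate, which is why the abstract's centralized bound scales with √κ_g rather than κ_g; the decentralized MSDA method replaces the exact master aggregation with multi-step gossip rounds, which this sketch does not implement.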

Original language: English
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 4630-4642
Number of pages: 13
ISBN (Electronic): 9781510855144
Publication status: Published - 1 Jan 2017
Externally published: Yes
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: 6 Aug 2017 – 11 Aug 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 6

Conference

Conference: 34th International Conference on Machine Learning, ICML 2017
Country/Territory: Australia
City: Sydney
Period: 6/08/17 – 11/08/17
