D-FW: Communication efficient distributed algorithms for high-dimensional sparse optimization

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose distributed algorithms for high-dimensional sparse optimization. In many applications, the parameter is sparse but high-dimensional. This is problematic for existing distributed algorithms, which require an information exchange stage involving transmission of the full parameter vector, which may not be sparse during the intermediate steps of optimization. The novelty of this work is to develop communication efficient algorithms using the stochastic Frank-Wolfe (sFW) algorithm, where the gradient computation is inexact but controllable. For the star network topology, we propose an algorithm with low communication cost and establish its convergence. The proposed algorithm is then extended to perform decentralized optimization on general network topologies. Numerical experiments are conducted to verify our findings.
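The communication savings stem from a property of the Frank-Wolfe method itself: its linear-minimization step over an l1-ball returns a 1-sparse vertex, so each update can be communicated as a single index and sign rather than a full dense vector. The sketch below illustrates this on a least-squares problem; it is a minimal, single-machine, deterministic Frank-Wolfe loop for intuition only, not the paper's D-FW algorithm (the function name `frank_wolfe_l1` and the problem setup are illustrative assumptions).

```python
import numpy as np

def frank_wolfe_l1(A, b, radius=1.0, n_iters=200):
    """Frank-Wolfe for min ||Ax - b||^2 over the l1-ball ||x||_1 <= radius.

    Illustrative sketch only. The linear-minimization oracle returns a
    1-sparse vertex of the l1-ball, so in a distributed setting each
    update could be transmitted as just (index, sign) -- the property
    the abstract exploits for communication efficiency.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(n_iters):
        grad = 2 * A.T @ (A @ x - b)        # gradient of the quadratic loss
        i = np.argmax(np.abs(grad))         # LMO: pick the best vertex,
        s = np.zeros(n)                     # namely -radius * sign(g_i) * e_i
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2)               # standard diminishing step size
        x = (1 - gamma) * x + gamma * s     # convex combination keeps x feasible
    return x
```

Because each iterate is a convex combination of at most `t` vertices, the iterates themselves stay sparse, matching the high-dimensional sparse regime the abstract targets.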

Original language: English
Title of host publication: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4144-4148
Number of pages: 5
ISBN (Electronic): 9781479999880
DOIs
Publication status: Published - 18 May 2016
Event: 41st IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016 - Shanghai, China
Duration: 20 Mar 2016 to 25 Mar 2016

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2016-May
ISSN (Print): 1520-6149

Conference

Conference: 41st IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016
Country/Territory: China
City: Shanghai
Period: 20/03/16 to 25/03/16

Keywords

  • communication efficient algorithm
  • decentralized algorithm
  • large-scale optimization
  • sparse optimization
