Fast distributed coordinate descent for non-strongly convex losses

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k²) convergence rate, where k is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on ARCHER - the largest supercomputer in the UK - and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
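To illustrate the basic building block behind the method described in the abstract, the sketch below implements plain (single-machine, non-accelerated) randomized proximal coordinate descent for a LASSO problem, (1/2)‖Ax − b‖² + λ‖x‖₁. This is a simplified stand-in, not the paper's algorithm: the distributed accelerated variant additionally partitions coordinates across nodes, samples and updates many coordinates in parallel, and adds a momentum sequence to reach the O(1/k²) rate, with stepsizes chosen per the paper's analysis. All function and variable names here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rcd_lasso(A, b, lam, iters=5000, seed=0):
    """Randomized proximal coordinate descent for
    (1/2)||A x - b||^2 + lam * ||x||_1 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = (A ** 2).sum(axis=0)      # coordinate-wise Lipschitz constants ||A_j||^2
    x = np.zeros(n)
    r = -b.astype(float)          # residual A x - b, maintained incrementally
    for _ in range(iters):
        j = rng.integers(n)       # pick a coordinate uniformly at random
        if L[j] == 0.0:
            continue
        g = A[:, j] @ r           # partial gradient along coordinate j
        new_xj = soft_threshold(x[j] - g / L[j], lam / L[j])
        delta = new_xj - x[j]
        if delta != 0.0:
            r += delta * A[:, j]  # keep the residual in sync with x
            x[j] = new_xj
    return x
```

Maintaining the residual `r = Ax - b` incrementally is what makes each coordinate step cost O(m) rather than a full matrix-vector product; the same bookkeeping idea, applied per node, is what makes distributed variants communication-efficient.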

Original language: English
Title of host publication: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Editors: Mamadou Mboup, Tulay Adali, Eric Moreau, Jan Larsen
Publisher: IEEE Computer Society
ISBN (Electronic): 9781479936946
DOIs
Publication status: Published - 14 Nov 2014
Externally published: Yes
Event: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014 - Reims, France
Duration: 21 Sept 2014 - 24 Sept 2014

Publication series

Name: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
ISSN (Print): 2161-0363
ISSN (Electronic): 2161-0371

Conference

Conference: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014
Country/Territory: France
City: Reims
Period: 21/09/14 - 24/09/14

Keywords

  • Coordinate descent
  • acceleration
  • distributed algorithms
