Parallel coordinate descent for the AdaBoost problem

Research output: Contribution to conference › Paper › peer-review

Abstract

We design a randomised parallel version of AdaBoost, building on previous work on parallel coordinate descent. The algorithm exploits the fact that the logarithm of the exponential loss has a coordinate-wise Lipschitz continuous gradient in order to define the step lengths. We prove convergence of this randomised AdaBoost algorithm and derive a theoretical parallelisation speedup factor. Finally, we present numerical experiments on learning problems of various sizes, showing that the algorithm is competitive with concurrent approaches, especially on large-scale problems.
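To make the idea in the abstract concrete, below is a minimal Python sketch of randomised parallel coordinate descent on the log-exp-loss, not the paper's exact algorithm. It assumes a margin matrix A with entries y_i h_j(x_i) in [-1, 1], bounds each coordinate-wise Lipschitz constant by max_i A[i, j]^2, and damps the parallel steps with the standard expected-separable-overapproximation factor of Richtárik and Takáč for τ-nice samplings, which this line of work builds on. The function name and parameters are illustrative.

```python
import numpy as np

def parallel_cd_adaboost(A, tau=4, n_iters=1000, seed=0):
    """Randomised parallel coordinate descent on f(lam) = log(sum_i exp(-(A @ lam)_i)).

    A[i, j] = y_i * h_j(x_i) is the margin matrix (entries assumed in [-1, 1]),
    so each coordinate-wise Lipschitz constant is bounded by max_i A[i, j]**2.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    tau = min(tau, n)
    L = np.maximum((A ** 2).max(axis=0), 1e-12)               # coordinate Lipschitz bounds
    omega = max(1, max(int(np.count_nonzero(r)) for r in A))  # degree of partial separability
    beta = 1.0 + (tau - 1) * (omega - 1) / max(n - 1, 1)      # ESO damping for tau-nice sampling
    lam = np.zeros(n)
    for _ in range(n_iters):
        margins = A @ lam
        w = np.exp(-(margins - margins.min()))                # numerically stabilised weights
        w /= w.sum()                                          # softmax of the negative margins
        S = rng.choice(n, size=tau, replace=False)            # random block of tau coordinates
        grad_S = -(A[:, S].T @ w)                             # partial derivatives of f on S
        lam[S] -= grad_S / (beta * L[S])                      # damped coordinate gradient steps
    return lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.sign(rng.standard_normal((200, 50)))               # toy +/-1 margin matrix
    lam = parallel_cd_adaboost(A, tau=8, n_iters=2000)
    margins = A @ lam
    # stable evaluation of log(sum_i exp(-margins_i))
    print("final log-exp-loss:",
          -margins.min() + np.log(np.exp(-(margins - margins.min())).sum()))
```

In the sketch, the τ coordinate updates inside each iteration are independent given the current weights w, which is what makes the method parallelisable; the damping factor beta shrinks the steps just enough to keep the simultaneous updates from overshooting.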

Original language: English
Pages: 354-358
Number of pages: 5
DOIs
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013 - Miami, FL, United States
Duration: 4 Dec 2013 - 7 Dec 2013

Conference

Conference: 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013
Country/Territory: United States
City: Miami, FL
Period: 4/12/13 - 7/12/13

Keywords

  • AdaBoost
  • iteration complexity
  • parallel algorithm
  • randomised coordinate descent
