Abstract
We design a randomised parallel version of Adaboost based on previous studies of parallel coordinate descent. The algorithm exploits the fact that the logarithm of the exponential loss has a coordinate-wise Lipschitz continuous gradient in order to define the step lengths. We prove the convergence of this randomised Adaboost algorithm and derive a theoretical parallelisation speedup factor. Finally, we provide numerical experiments on learning problems of various sizes showing that the algorithm is competitive with existing approaches, especially on large-scale problems.
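The abstract describes the method only at a high level. As a rough, hypothetical illustration of the kind of update it refers to, the Python sketch below performs one randomised parallel coordinate-descent step on f(α) = log Σᵢ exp(−yᵢ(Hα)ᵢ), the logarithm of the exponential loss. The function name, the matrix `H` of weak-learner outputs, and the parameters `tau` (coordinates updated per step) and `beta` (step-size damping for parallel updates) are assumptions introduced here for illustration, not the paper's notation; the coordinate-wise Lipschitz constant is taken as 1, which holds when the entries of `H` lie in [−1, 1].

```python
import numpy as np

def randomised_parallel_adaboost_step(alpha, H, y, tau, beta, rng):
    """One randomised parallel coordinate-descent step on
    f(alpha) = log( sum_i exp(-y_i * (H @ alpha)_i) ).

    Hypothetical sketch: H[i, j] = h_j(x_i) with values in [-1, 1],
    y in {-1, +1}^n, tau coordinates updated per step, beta >= 1 a
    step-size damping factor for parallel updates (an assumption,
    not the paper's exact constant).
    """
    margins = y * (H @ alpha)
    # Normalised example weights (softmax of negative margins),
    # shifted by the minimum margin for numerical stability.
    w = np.exp(-(margins - margins.min()))
    w /= w.sum()
    # Gradient of f in coordinate j is -sum_i w_i * y_i * H[i, j].
    grad = -(H.T @ (w * y))
    # Draw a random subset of tau coordinates to update in parallel.
    S = rng.choice(H.shape[1], size=tau, replace=False)
    # The coordinate-wise Lipschitz constant of grad f is at most
    # max_i H[i, j]^2 <= 1 here, so the serial step would be -grad;
    # damping by beta accounts for the simultaneous updates.
    alpha = alpha.copy()
    alpha[S] -= grad[S] / beta
    return alpha

# Toy usage on synthetic {-1, +1} weak-learner outputs.
rng = np.random.default_rng(0)
n, m = 200, 50
H = rng.choice([-1.0, 1.0], size=(n, m))
y = rng.choice([-1.0, 1.0], size=n)
alpha = np.zeros(m)
for _ in range(100):
    alpha = randomised_parallel_adaboost_step(alpha, H, y, tau=8, beta=2.0, rng=rng)
```

In the parallel coordinate descent literature the damping factor is typically derived from the degree of separability of the loss; here it is simply left as a free parameter of the sketch.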
| Original language | English |
|---|---|
| Pages | 354-358 |
| Number of pages | 5 |
| DOIs | |
| Publication status | Published - 1 Jan 2013 |
| Externally published | Yes |
| Event | 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013 - Miami, FL, United States |
| Duration | 4 Dec 2013 → 7 Dec 2013 |
Conference
| Conference | 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013 |
|---|---|
| Country/Territory | United States |
| City | Miami, FL |
| Period | 4/12/13 → 7/12/13 |
Keywords
- Adaboost
- iteration complexity
- parallel algorithm
- randomised coordinate descent