Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization

Umut Şimşekli, Çağatay Yıldız, Thanh Huy Nguyen, Gaël Richard, A. Taylan Cemgil

Research output: Contribution to journal › Conference article › peer-review

Abstract

Recent studies have illustrated that stochastic gradient Markov Chain Monte Carlo (MCMC) techniques have strong potential in non-convex optimization, where local and global convergence guarantees can be shown under certain conditions. Building on this recent theory, in this study we develop an asynchronous-parallel stochastic L-BFGS algorithm for non-convex optimization. The proposed algorithm is suitable for both distributed and shared-memory settings. We provide a formal theoretical analysis and show that the proposed method achieves an ergodic convergence rate of O(1/√N) (N being the total number of iterations) and that it can achieve a linear speedup under certain conditions. We perform several experiments on both synthetic and real datasets. The results support our theory and show that the proposed algorithm provides a significant speedup over the recently proposed synchronous distributed L-BFGS algorithm.
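For intuition, the following is a minimal, serial Python sketch of the kind of update the abstract describes: an SGLD-style step preconditioned by the L-BFGS two-loop recursion. It is illustrative only, not the paper's method; the actual algorithm is asynchronous-parallel and, as a proper SG-MCMC sampler, treats the injected-noise covariance and curvature-pair maintenance more carefully than this simplification. All function and variable names below are hypothetical.

import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    # Apply the L-BFGS inverse-Hessian approximation to a (stochastic) gradient
    # using stored curvature pairs (s_i, y_i); returns grad unchanged when memory is empty.
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling H_0 = gamma * I, the standard L-BFGS choice
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def sgmcmc_lbfgs_step(theta, stoch_grad, s_list, y_list, eta, rng):
    # One preconditioned SGLD-style update: quasi-Newton direction plus isotropic
    # Gaussian noise (a faithful sampler would shape the noise by the preconditioner).
    direction = two_loop_recursion(stoch_grad, s_list, y_list)
    return theta - eta * direction + np.sqrt(2.0 * eta) * rng.standard_normal(theta.shape)

# Tiny usage example: noisy gradients of f(theta) = 0.5 * ||theta||^2.
rng = np.random.default_rng(0)
theta = rng.standard_normal(5)
s_list, y_list, memory = [], [], 5
prev_theta, prev_grad = theta.copy(), None
for k in range(500):
    grad = theta + 0.1 * rng.standard_normal(theta.shape)  # stand-in for a minibatch gradient
    if prev_grad is not None:
        s, y = theta - prev_theta, grad - prev_grad  # displacement and gradient change over the last step
        if y @ s > 1e-10:  # keep only pairs satisfying the curvature condition
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > memory:
                s_list.pop(0)
                y_list.pop(0)
    prev_theta, prev_grad = theta.copy(), grad
    theta = sgmcmc_lbfgs_step(theta, grad, s_list, y_list, eta=1e-3, rng=rng)

Dropping curvature pairs that violate the condition y @ s > 0 is the standard way to keep the L-BFGS approximation positive definite when gradients are noisy; without it the two-loop recursion can diverge.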

Original language: English
Pages (from-to): 4674-4683
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 80
Publication status: Published - 1 Jan 2018
Externally published: Yes
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018
