Abstract
We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly) infinite number of linear inclusion constraints that must be satisfied almost surely. We use smoothing and homotopy techniques to handle the constraints without the need for matrix-valued projections. We show that our stochastic gradient algorithm achieves an O(log(k)/√k) convergence rate for general convex objectives and an O(log(k)/k) convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to a logarithmic factor, even in the absence of constraints. We conduct numerical experiments on basis pursuit, hard-margin support vector machine, and portfolio optimization problems and show that our algorithm achieves state-of-the-art practical performance.
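The abstract does not spell out the algorithm, but the smoothing-and-homotopy idea can be illustrated on basis pursuit (min ‖x‖₁ s.t. Ax = b), one of the experiments mentioned above. The sketch below is a toy under our own assumptions, not the paper's exact method: it smooths the constraints with a quadratic penalty whose weight β_k grows over iterations (the homotopy), takes a proximal step for the ℓ₁ norm, and samples a single constraint row per iteration to mimic the almost-sure setting. The function names and the β_k and step-size schedules are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def stochastic_penalty_homotopy(A, b, iters=200000, seed=0):
    """Toy proximal stochastic gradient for  min ||x||_1  s.t.  Ax = b,
    with the constraints smoothed by a quadratic penalty whose weight
    beta_k grows over iterations (the homotopy). One constraint row is
    sampled per iteration; this is a sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    x_avg = np.zeros(n)
    for k in range(1, iters + 1):
        i = rng.integers(m)                          # sample one constraint
        beta = np.sqrt(k)                            # homotopy: growing penalty
        # Stochastic gradient of the smoothed penalty (beta/2)(a_i^T x - b_i)^2.
        g = beta * (A[i] @ x - b[i]) * A[i]
        gamma = 1.0 / (beta * (A[i] @ A[i]) + 1.0)   # decaying step size
        x = soft_threshold(x - gamma * g, gamma)     # prox step on ||x||_1
        x_avg += (x - x_avg) / k                     # ergodic average iterate
    return x_avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true                       # consistent system with sparse solution
    x_hat = stochastic_penalty_homotopy(A, b)
    print("constraint residual:", np.linalg.norm(A @ x_hat - b))
```

In schemes of this kind, the O(log(k)/√k) rate quoted in the abstract is typically stated for the averaged iterate, which is why the sketch returns the running average rather than the last iterate.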
| Original language | English |
|---|---|
| Pages (from-to) | 1910-1919 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 97 |
| Publication status | Published - 1 Jan 2019 |
| Externally published | Yes |
| Event | 36th International Conference on Machine Learning, ICML 2019 - Long Beach, United States |
| Event duration | 9 Jun 2019 → 15 Jun 2019 |