Abstract
We propose a conditional gradient framework for a composite convex minimization template with broad applications. Our approach combines smoothing and homotopy techniques under the CGM framework, and provably achieves the optimal O(1/√k) convergence rate. We demonstrate that the same rate holds if the linear subproblems are solved approximately, with either additive or multiplicative error. In contrast with prior work, we are able to characterize the convergence when the non-smooth term is an indicator function. Specific applications of our framework include non-smooth minimization, semidefinite programming, and minimization with linear inclusion constraints over a compact domain. Numerical evidence demonstrates the benefits of our framework.
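The abstract builds on the classical conditional gradient (Frank-Wolfe) method, in which each iteration solves a linear subproblem over the feasible set and takes a convex combination step. The following is a minimal sketch of that basic scheme on an illustrative problem (least squares over the probability simplex); the problem instance, function names, and step-size choice are assumptions for illustration, not the paper's algorithm, which additionally incorporates smoothing and homotopy for composite non-smooth templates.

```python
import numpy as np

def frank_wolfe_simplex(A, b, iters=200):
    """Minimize 0.5*||Ax - b||^2 over the probability simplex
    with the classical conditional gradient (Frank-Wolfe) method."""
    n = A.shape[1]
    x = np.ones(n) / n  # feasible start: uniform point in the simplex
    for k in range(iters):
        grad = A.T @ (A @ x - b)
        # Linear minimization oracle over the simplex: the minimizer of
        # <grad, s> over the simplex is a vertex (a standard basis vector).
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0
        gamma = 2.0 / (k + 2)  # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Illustrative instance with a known feasible generator of b.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = A @ np.array([0.5, 0.5, 0.0, 0.0, 0.0])
x_hat = frank_wolfe_simplex(A, b)
print(x_hat.sum(), x_hat.min())  # iterate remains in the simplex
```

Because every step is a convex combination of simplex points, no projection is ever needed — this projection-free property is what makes conditional gradient methods attractive for the semidefinite and linearly constrained applications the abstract mentions.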
| Original language | English |
|---|---|
| Pages (from-to) | 5727-5736 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 80 |
| Publication status | Published - 1 Jan 2018 |
| Externally published | Yes |
| Event | 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden |
| Duration | 10 Jul 2018 → 15 Jul 2018 |