Abstract
We study the problem of aggregation of estimators. Given a collection of M different estimators, we construct a new estimator, called the aggregate, which is nearly as good as the best linear combination of the initial estimators over an ℓ₁-ball of ℝ^M. The aggregate is obtained by a particular version of the mirror averaging algorithm. We show that our aggregation procedure satisfies sharp oracle inequalities under general assumptions. We then apply these results to a new aggregation problem: D-convex aggregation. Finally, we implement our procedure in a Gaussian regression model with random design and prove its optimality, in the minimax sense, up to a logarithmic factor.
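For intuition, the following is a minimal Python sketch of plain exponential-weight mirror averaging over the simplex of ℝ^M, the simplest special case of the idea described above; the paper's procedure aggregates over an ℓ₁-ball and uses a problem-specific tuning, so the function name, the squared loss, and the temperature parameter `beta` below are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def mirror_averaging_aggregate(preds, y, beta=1.0):
    """Illustrative sketch (not the paper's exact procedure).

    preds : (n, M) array of predictions of the M initial estimators
            at the n observed design points.
    y     : (n,) array of responses.
    beta  : temperature parameter (assumed; chosen problem-dependently
            in the actual analysis).

    Returns an averaged weight vector theta_hat in the simplex of R^M,
    so the aggregate predictor is x -> sum_j theta_hat[j] * f_j(x).
    """
    n, M = preds.shape
    cum_loss = np.zeros(M)        # cumulative squared loss of each estimator
    theta_sum = np.ones(M) / M    # uniform weights at step 0

    for k in range(n):
        cum_loss += (preds[k] - y[k]) ** 2
        logits = -cum_loss / beta
        logits -= logits.max()           # numerical stability
        w = np.exp(logits)
        theta_sum += w / w.sum()         # accumulate the step-k weight vector

    return theta_sum / (n + 1)           # Cesàro average over the n + 1 steps
```

The key feature, shared with the procedure studied in the paper, is that the returned weights are a time average of exponential weights driven by cumulative losses, rather than the weights from the final step alone.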
| Original language | English |
|---|---|
| Pages (from-to) | 246-259 |
| Number of pages | 14 |
| Journal | Mathematical Methods of Statistics |
| Volume | 16 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Sept 2007 |
| Externally published | Yes |
Keywords
- aggregation
- learning
- mirror averaging
- sparsity
- stochastic optimization