Learning heteroscedastic models by convex programming under group sparsity

Research output: Contribution to conference › Paper › peer-review

Abstract

Popular sparse estimation methods based on ℓ1-relaxation, such as the Lasso and the Dantzig selector, require knowledge of the variance of the noise in order to properly tune the regularization parameter. This constitutes a major obstacle to applying these methods in several frameworks, such as time series, random fields, and inverse problems, for which the noise is rarely homoscedastic and its level is hard to know in advance. In this paper, we propose a new approach to the joint estimation of the conditional mean and the conditional variance in a high-dimensional (auto-)regression setting. An attractive feature of the proposed estimator is that it is efficiently computable even for very large-scale problems by solving a second-order cone program (SOCP). We present theoretical analysis and numerical results assessing the performance of the proposed procedure.
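The unknown-noise-level issue the abstract describes can be illustrated with the scaled (square-root) Lasso, a well-known relative of this kind of joint mean/variance estimation. The sketch below is a simplified, illustrative stand-in, not the paper's SOCP-based procedure: it omits group sparsity and conditional-variance modelling, and all constants and names are assumptions.

```python
import numpy as np

# Illustrative sketch: jointly estimating a sparse coefficient vector and a
# single noise level via the scaled Lasso, alternating a closed-form variance
# update with a proximal-gradient (soft-thresholding) step on the Lasso part.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
sigma_true = 0.5                          # unknown to the estimator
y = X @ beta_true + sigma_true * rng.standard_normal(n)

# Pivotal regularization level: does NOT depend on the unknown noise variance,
# which is the practical point the abstract makes.
lam = 1.5 * np.sqrt(np.log(p) / n)

b = np.zeros(p)
t = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()   # step size 1 / Lipschitz
for _ in range(2000):
    # Closed-form update of the noise-level estimate given current residuals.
    sig = max(np.linalg.norm(y - X @ b) / np.sqrt(n), 1e-8)
    # One proximal-gradient step on the Lasso subproblem with penalty lam*sig.
    z = b - t * (X.T @ (X @ b - y) / n)
    b = np.sign(z) * np.maximum(np.abs(z) - t * lam * sig, 0.0)

support = np.flatnonzero(np.abs(b) > 0.1)
print("estimated support:", support)       # typically the true indices 0, 1, 2
print("estimated sigma:", round(sig, 3))   # close to the true 0.5
```

Because the regularization level is pivotal (noise-free), no preliminary variance estimate is needed; the paper's method extends this idea to group-sparse, heteroscedastic settings via an SOCP.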

Original language: English
Pages: 1416-1424
Number of pages: 9
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States
Duration: 16 Jun 2013 - 21 Jun 2013

Conference

Conference: 30th International Conference on Machine Learning, ICML 2013
Country/Territory: United States
City: Atlanta, GA
Period: 16/06/13 - 21/06/13
