ℓp-ℓq penalty for sparse linear and sparse multiple kernel multitask learning

Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Stéphane Canu

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, there has been much interest in the multitask learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on ℓp-ℓq (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties. Our motivation for addressing such a large class of penalties is to adapt the penalty to the problem at hand, thus leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the ℓ1-ℓq penalty, which helps us propose an alternate optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the ℓ1-ℓq penalized problem. For the linear case, we extend existing works considering accelerated proximal gradient to this penalty. Our contribution in this context is to provide an efficient scheme for computing the ℓ1-ℓq proximal operator. Then, for the more general case, when 0 < p < 1, we solve the resulting nonconvex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted ℓ1-ℓq sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with brain-computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
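For concreteness, the ℓp-ℓq mixed norm penalizes the matrix W of task weight vectors (one row per feature, one column per task) as Ω(W) = Σ_j (Σ_t |W_{j,t}|^q)^{p/q}. The sketch below is not the paper's algorithm; it illustrates, under simplifying assumptions, two of the building blocks the abstract mentions: the ℓ1-ℓq proximal operator in the closed-form special case q = 2 (row-wise group soft-thresholding), and a standard majorization-minimization weight update for 0 < p < 1 obtained by linearizing t ↦ t^p at the current row norms. The paper's own scheme handles general q ∈ [1, 2]; the function names and the eps smoothing constant here are illustrative.

    import numpy as np

    def prox_l1_lq_q2(W, lam):
        # Proximal operator of lam * sum_j ||W[j, :]||_2, i.e. the l1-lq
        # mixed norm with q = 2. Each row is shrunk toward zero and dropped
        # entirely when its norm falls below lam, which is what produces the
        # joint-sparsity pattern across tasks.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        return scale * W

    def mm_weights(W, p, q, eps=1e-8):
        # Majorization-minimization step for 0 < p < 1: linearizing the
        # concave map t -> t**p at the current row q-norms yields per-row
        # weights for the next (convex) weighted l1-lq subproblem.
        row_norms = np.sum(np.abs(W) ** q, axis=1) ** (1.0 / q)
        return p * (row_norms + eps) ** (p - 1.0)

Iterating a weighted ℓ1-ℓq solver with such weights until the active rows stabilize mirrors, in spirit, the reweighted scheme described in the abstract.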

Original language: English
Article number: 5948411
Pages (from-to): 1307-1320
Number of pages: 14
Journal: IEEE Transactions on Neural Networks
Volume: 22
Issue number: 8
Publication status: Published - 1 Aug 2011
Externally published: Yes

Keywords

  • Mixed norm
  • multiple kernel learning
  • multitask learning
  • sparsity
  • support vector machines
