Infinite Task Learning in RKHSs

Research output: Contribution to journal › Conference article › peer-review

Abstract

Machine learning has witnessed tremendous success in solving tasks that depend on a single hyperparameter. When considering several tasks simultaneously, multi-task learning makes it possible to account for the similarities between tasks via appropriate regularizers. A step further consists of learning a continuum of tasks for various loss functions. A promising approach, called Parametric Task Learning, has paved the way in the continuum setting for affine models and piecewise-linear loss functions. In this work, we introduce a novel approach called Infinite Task Learning: its goal is to learn a function whose output is itself a function over the hyperparameter space. We leverage tools from operator-valued kernels and the associated vector-valued Reproducing Kernel Hilbert Spaces, which provide explicit control over the role of the hyperparameters and also allow us to consider new types of constraints. We provide generalization guarantees for the suggested scheme and illustrate its efficiency in cost-sensitive classification, quantile regression, and density level set estimation.
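The quantile-regression application can be illustrated with a minimal sketch. This is not the authors' exact method: it replaces the operator-valued kernel machinery with a simple product of two Gaussian kernels, one on the inputs x and one on the quantile level θ (the hyperparameter), and minimizes the pinball loss sampled over a grid of levels by subgradient descent. All data, kernel bandwidths, and step sizes below are illustrative assumptions.

```python
import numpy as np

def gauss(a, b, gamma):
    # Gaussian kernel matrix between two 1-D sample arrays.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(0)
n = 80
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(n)  # toy regression data

thetas = np.linspace(0.1, 0.9, 9)   # sampled quantile levels (the "continuum")
Kx = gauss(x, x, gamma=10.0)        # kernel on inputs
Kt = gauss(thetas, thetas, gamma=2.0)  # kernel on hyperparameters

# Model: h(x_i, theta_j) = (Kx @ A @ Kt)[i, j] -- one function of both
# the input and the quantile level, rather than one model per level.
A = np.zeros((n, len(thetas)))
lam, lr = 1e-3, 0.1

for _ in range(500):
    H = Kx @ A @ Kt                 # predictions at every (x_i, theta_j)
    R = y[:, None] - H              # residuals
    # Subgradient of the pinball loss w.r.t. the prediction H[i, j]
    G = np.where(R > 0, -thetas[None, :], 1.0 - thetas[None, :])
    grad = Kx @ G @ Kt / (n * len(thetas)) + lam * A
    A -= lr * grad

H = Kx @ A @ Kt
# Columns of H estimate conditional quantiles; on average, higher levels
# should yield larger predictions than lower ones.
gap = np.mean(H[:, -1] - H[:, 0])
```

In the paper's actual formulation the quantile level is not restricted to a grid at prediction time: the learned function can be evaluated at any θ, and the RKHS structure is what allows constraints (e.g. on quantile crossing) to be imposed over the whole continuum.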

Original language: English
Pages (from-to): 1294-1302
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 89
Publication status: Published - 1 Jan 2019
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: 16 Apr 2019 – 18 Apr 2019

