Regularized ERM on random subspaces

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data-dependent subspaces spanned by a random subset of the data, recovering as a special case Nyström approaches for kernel methods. Considering random subspaces naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational tradeoffs have recently been explored for the least squares loss and for self-concordant loss functions, such as the logistic loss. Here, we extend these results to convex Lipschitz loss functions that may not be smooth, such as the hinge loss used in support vector machines. This extension requires developing new proofs that use different technical tools. Our main results show the existence of different settings, depending on how hard the learning problem is, for which computational efficiency can be improved with no loss in performance. Theoretical results are illustrated with simple numerical experiments.
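The Nyström construction described in the abstract can be made concrete with a short sketch: restrict the hypothesis space to the span of m randomly chosen training points and minimize the regularized hinge loss over the m coefficients. The code below is a minimal illustration under assumed choices (an RBF kernel, a Pegasos-style decaying-step subgradient method, and made-up hyperparameters); it is not the authors' implementation, and all function names are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_hinge_erm(X, y, m=50, lam=1e-3, gamma=1.0, n_iter=500, seed=0):
    """Regularized hinge-loss ERM over the random subspace spanned by
    m training points (a Nystrom-type subspace); labels y in {-1, +1}.
    Objective: (1/n) * sum_i hinge(y_i * f(x_i)) + lam * beta' K_mm beta,
    where f(x) = sum_j beta_j k(x, x_{idx_j})."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # random subset of the data
    K_nm = rbf_kernel(X, X[idx], gamma)          # n x m feature map
    K_mm = rbf_kernel(X[idx], X[idx], gamma)     # metric for the RKHS norm
    beta = np.zeros(m)
    for t in range(1, n_iter + 1):
        margins = y * (K_nm @ beta)
        active = margins < 1                     # points violating the margin
        # Subgradient of the hinge term plus gradient of the regularizer.
        g = -(K_nm[active].T @ y[active]) / n + 2 * lam * (K_mm @ beta)
        beta -= (1.0 / np.sqrt(t)) * g           # decaying step size
    return idx, beta

# Usage: predict on new points X_test via
#   scores = rbf_kernel(X_test, X[idx], gamma) @ beta
```

Note the computational saving the abstract refers to: each iteration touches an n x m kernel block rather than the full n x n matrix, while the term beta' K_mm beta plays the role of the squared RKHS norm restricted to the subspace.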

Original language: English
Pages (from-to): 4006-4014
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 130
Publication status: Published - 1 Jan 2021
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: 13 Apr 2021 - 15 Apr 2021
