Learning from survey training samples: Rate bounds for Horvitz-Thompson risk minimizers

Stephan Clemencon, Guillaume Papa, Patrice Bertail

Research output: Contribution to journal › Conference article › Peer-review

Abstract

The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations, and involve sharp control of uniform deviations of i.i.d. averages from their expectations. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.'s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not been documented in the literature yet. The main purpose of this paper is to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and knowledge of the related (first-order) inclusion probabilities. Precisely, we prove that minimizing a (possibly biased) weighted version of the empirical risk, referred to as the (approximate) Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of order O_P((κ_N log N / n)^{1/2}) with κ_N = (n/N) / min_{i ≤ N} π_i, when data are sampled by means of a rejective scheme of (deterministic) size n within a statistical population of cardinality N > n, a generalization of basic sampling without replacement with unequal probability weights π_i > 0. Extensions to other sampling schemes are then established by a coupling argument.
Beyond theoretical results, numerical experiments are displayed in order to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.
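The Horvitz-Thompson risk described in the abstract weights each sampled observation's loss by the inverse of its first-order inclusion probability. As an illustration only (the function name `ht_risk` and the toy numbers are not from the paper), a minimal sketch of this estimator for the 0-1 classification loss might look like:

```python
import numpy as np

def ht_risk(y_true, y_pred, pi, N):
    """Horvitz-Thompson estimate of the 0-1 risk.

    y_true, y_pred : labels and predictions for the n sampled units
    pi             : first-order inclusion probabilities pi_i of those units
    N              : size of the full statistical population

    Computes R_HT = (1/N) * sum_{i in sample} (1/pi_i) * 1{y_pred_i != y_true_i}.
    """
    errors = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    return float(np.sum(errors / np.asarray(pi)) / N)

# Sanity check: with uniform inclusion probabilities pi_i = n/N, the HT risk
# reduces to the ordinary empirical error rate on the sample.
N, n = 100, 4
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])
pi = np.full(n, n / N)
print(ht_risk(y_true, y_pred, pi, N))  # 0.25, the plain error rate on 4 points
```

Minimizing such a reweighted risk over a class of classifiers, rather than the unweighted empirical risk, is what corrects for the unequal inclusion probabilities of the survey scheme.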

Original language: English
Pages (from-to): 142-157
Number of pages: 16
Journal: Journal of Machine Learning Research
Volume: 63
Publication status: Published - 1 Jan 2016
Externally published: Yes
Event: 8th Asian Conference on Machine Learning, ACML 2016 - Hamilton, New Zealand
Duration: 16 Nov 2016 - 18 Nov 2016

Keywords

  • Empirical risk minimization
  • Horvitz-Thompson estimator
  • Learning rate bound
  • Negatively associated random variables
  • Probabilistic theory of pattern recognition
  • Sampling method
  • Survey scheme
