Optimal survey schemes for stochastic gradient descent with applications to M-estimation

Stephan Clémençon, Patrice Bertail, Emilie Chautru, Guillaume Papa

Research output: Contribution to journal › Article › peer-review

Abstract

Iterative stochastic approximation methods are widely used to solve M-estimation problems, in particular in the context of predictive learning. In certain situations, which will undoubtedly become more and more common in the Big Data era, the available datasets are so massive that computing statistics over the full sample is hardly feasible, if not infeasible. A natural and popular approach to gradient descent in this context consists in substituting the "full data" statistics with their counterparts based on subsamples of manageable size picked at random. The main purpose of this paper is to investigate the impact of survey sampling with unequal inclusion probabilities on stochastic gradient descent-based M-estimation methods. Precisely, we prove that, in the presence of some a priori information, one may significantly increase statistical accuracy in terms of limit variance by choosing appropriate first-order inclusion probabilities. These results are described by asymptotic theorems and are also supported by illustrative numerical experiments.
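The approach the abstract describes can be illustrated with a minimal sketch: at each SGD step, a Poisson survey sample is drawn (point i enters with first-order inclusion probability pi[i]), and the full-data gradient is replaced by its Horvitz-Thompson estimate, where each sampled contribution is reweighted by 1/pi[i] so the estimate remains unbiased. The example below is not the paper's algorithm; it is a hypothetical toy (least-squares M-estimation in one dimension, with arbitrarily chosen inclusion probabilities) assuming the names `ht_sgd`, `data`, and `pi`.

```python
import random

def ht_sgd(data, pi, n_steps=2000, lr=0.05, seed=0):
    """SGD for one-dimensional least-squares M-estimation where each
    step's gradient is the Horvitz-Thompson estimator built from a
    Poisson survey sample of the data."""
    rng = random.Random(seed)
    n = len(data)
    theta = 0.0
    for _ in range(n_steps):
        grad = 0.0
        for i, x in enumerate(data):
            if rng.random() < pi[i]:         # Poisson sampling: keep i w.p. pi[i]
                grad += (theta - x) / pi[i]  # Horvitz-Thompson reweighting
        theta -= lr * grad / n               # unbiased estimate of the full gradient
    return theta

# Toy data: the least-squares M-estimator is the sample mean (here 4.0).
data = [1.0, 2.0, 3.0, 10.0]
# Hypothetical unequal first-order inclusion probabilities: the outlying
# point, which dominates the gradient, is sampled more often.
pi = [0.3, 0.3, 0.3, 0.9]
theta_hat = ht_sgd(data, pi)
```

The paper's point, in this toy setting, is that the choice of `pi` controls the variance of the Horvitz-Thompson gradient estimate, and hence the limit variance of the iterates; allocating higher inclusion probability to high-leverage points reduces it compared with uniform sampling at the same expected cost.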

Original language: English
Pages (from-to): 310-337
Number of pages: 28
Journal: ESAIM - Probability and Statistics
Volume: 23
DOIs
Publication status: Published - 1 Jan 2019
Externally published: Yes

Keywords

  • Asymptotic analysis
  • Central limit theorem
  • Horvitz-Thompson estimator
  • M-estimation
  • Poisson sampling
  • Stochastic gradient descent
  • Survey scheme

