Optimal aggregation of classifiers in statistical learning

Research output: Contribution to journal › Article › peer-review

Abstract

Classification can be considered as nonparametric estimation of sets, where the risk is defined by means of a specific distance between sets associated with misclassification error. It is shown that the rates of convergence of classifiers depend on two parameters: the complexity of the class of candidate sets and the margin parameter. The dependence is explicitly given, indicating that optimal fast rates approaching O(n⁻¹) can be attained, where n is the sample size, and that the proposed classifiers have the property of robustness to the margin. The main result of the paper concerns optimal aggregation of classifiers: we suggest a classifier that automatically adapts both to the complexity and to the margin, and attains the optimal fast rates, up to a logarithmic factor.
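The aggregation idea described in the abstract can be illustrated, in a much-simplified form, by model-selection aggregation: given a finite family of candidate classifiers, pick the one minimizing the empirical misclassification error on a held-out sample. This is a hedged toy sketch, not the paper's actual construction; the threshold-rule family, the data-generating rule, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: the (unknown) true decision set is {x > 0.3}.
n = 400
x = rng.uniform(-1, 1, size=n)
y = (x > 0.3).astype(int)

# Hypothetical candidate family: threshold classifiers 1{x > t} on a grid.
thresholds = np.linspace(-1, 1, 21)

# Split the sample; aggregation selects a candidate on the validation half.
split = n // 2
x_val, y_val = x[split:], y[split:]

def emp_risk(t, xs, ys):
    """Empirical misclassification error of the rule 1{x > t}."""
    return np.mean((xs > t).astype(int) != ys)

# Model-selection aggregation: minimize empirical risk over the candidates.
best_t = min(thresholds, key=lambda t: emp_risk(t, x_val, y_val))
print("selected threshold:", best_t)
print("full-sample risk:", emp_risk(best_t, x, y))
```

The selected threshold lands near the true boundary 0.3. The paper's contribution is the much stronger statement that such an aggregate can adapt simultaneously to the unknown complexity of the candidate class and to the margin parameter, achieving the optimal fast rates up to a logarithmic factor.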

Original language: English
Pages (from-to): 135-166
Number of pages: 32
Journal: Annals of Statistics
Volume: 32
Issue number: 1
DOIs
Publication status: Published - 1 Feb 2004

Keywords

  • Aggregation of classifiers
  • Classification
  • Complexity of classes of sets
  • Empirical processes
  • Margin
  • Optimal rates
  • Statistical learning
