The multi-armed bandit problem with covariates

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a nonparametric model where the expected rewards are smooth functions of the covariate and where the hardness of the problem is captured by a margin parameter. To maximize the expected cumulative reward, we introduce a policy called Adaptively Binned Successive Elimination (ABSE) that adaptively decomposes the global problem into suitably "localized" static bandit problems. This policy constructs an adaptive partition using a variant of the Successive Elimination (SE) policy. Our results include sharper regret bounds for the SE policy in a static bandit problem and minimax optimal regret bounds for the ABSE policy in the dynamic problem.
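The ABSE policy described above builds on Successive Elimination. As a rough illustration only (not the authors' construction, and with illustrative confidence-radius constants), a minimal sketch of SE for a static bandit might look like the following, where the callable arm interface, the `delta` confidence parameter, and the elimination rule are all assumptions made for the example:

```python
import math
import numpy as np

def successive_elimination(arms, horizon, delta=0.05, rng=None):
    """Illustrative Successive Elimination for a static multi-armed bandit.

    `arms` is a list of callables, each returning a noisy reward in [0, 1]
    when pulled. Surviving arms are sampled round-robin; an arm is dropped
    once its empirical mean falls below the leader's by more than the sum
    of their confidence radii. (Constants here are for illustration only.)
    """
    rng = rng or np.random.default_rng()
    k = len(arms)
    active = list(range(k))
    counts = np.zeros(k)
    means = np.zeros(k)
    t = 0
    while t < horizon and len(active) > 1:
        # Round-robin: pull each surviving arm once.
        for i in list(active):
            if t >= horizon:
                break
            reward = arms[i]()
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
            t += 1
        # Confidence radius from a Hoeffding bound with a crude union bound.
        radius = {
            i: math.sqrt(math.log(4.0 * k * counts[i] ** 2 / delta) / (2.0 * counts[i]))
            for i in active
        }
        leader = max(active, key=lambda i: means[i])
        # Keep only arms that are still statistically competitive with the leader.
        active = [
            i for i in active
            if means[leader] - means[i] <= radius[leader] + radius[i]
        ]
    best = max(active, key=lambda i: means[i])
    return best, means, counts

# Hypothetical usage with two Bernoulli arms of means 0.5 and 0.6.
if __name__ == "__main__":
    gen = np.random.default_rng(0)
    arms = [lambda: float(gen.random() < 0.5), lambda: float(gen.random() < 0.6)]
    best, means, counts = successive_elimination(arms, horizon=5000)
    print(best, means, counts)
```

ABSE, as described in the abstract, would run a variant of this elimination scheme separately on adaptively refined bins of the covariate space rather than on a single global problem.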

Original language: English
Pages (from-to): 693-721
Number of pages: 29
Journal: Annals of Statistics
Volume: 41
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2013
Externally published: Yes

Keywords

  • Adaptive partition
  • Contextual bandit
  • Multi-armed bandit
  • Nonparametric bandit
  • Regret bounds
  • Sequential allocation
  • Successive elimination
