Profitable Bandits

Mastane Achab, Stephan Clémençon, Aurélien Garivier

Research output: Contribution to journal › Conference article › peer-review

Abstract

Originally motivated by default risk management applications, this paper investigates a novel problem, referred to here as the profitable bandit problem. At each step, an agent chooses a subset of the K ≥ 1 possible actions. For each chosen action, she pays the sum of a random number of costs and receives the sum of a random number of rewards. Her objective is to maximize her cumulative profit. To this end, we adapt and study three well-known strategies that were proved to be highly efficient in other settings: kl-UCB, Bayes-UCB and Thompson Sampling. For each of them, we prove a finite-time regret bound which, together with a lower bound we also obtain, establishes asymptotic optimality in some cases. We also compare these three strategies from both theoretical and empirical perspectives, giving simple, self-contained proofs that emphasize their similarities as well as their differences. While both Bayesian strategies automatically adapt to the geometry of information, the numerical experiments carried out show a slight advantage for Thompson Sampling in practice.
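The setting described above can be sketched in a few lines: an agent samples, for each arm, a posterior draw of its mean profit and plays every arm whose draw is positive. This is a minimal, hypothetical illustration of Thompson Sampling in a profitable-bandit setting with Gaussian profits; the arm means, priors, horizon, and reward model are illustrative assumptions, not the paper's exact model or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: K arms whose per-pull profit is Gaussian with
# unknown mean (illustrative values, not from the paper).
K = 4
true_means = np.array([0.3, 0.1, -0.2, -0.5])
T = 5000  # horizon

# Gaussian Thompson Sampling: N(0, 1) prior on each mean, known unit noise.
counts = np.zeros(K)   # number of pulls per arm
sums = np.zeros(K)     # cumulated observed profit per arm
profit = 0.0

for t in range(T):
    post_mean = sums / (counts + 1.0)   # posterior mean (prior mean 0)
    post_var = 1.0 / (counts + 1.0)     # posterior variance
    samples = rng.normal(post_mean, np.sqrt(post_var))
    # Play every arm currently sampled as profitable (subset selection).
    for k in np.flatnonzero(samples > 0):
        r = rng.normal(true_means[k], 1.0)
        counts[k] += 1
        sums[k] += r
        profit += r

# An oracle knowing the means would play exactly the arms with positive mean.
oracle = T * true_means[true_means > 0].sum()
print(f"cumulated profit: {profit:.1f}  (oracle: {oracle:.1f})")
```

With enough rounds, the posterior draws for unprofitable arms concentrate below zero, so those arms are played only rarely, while the profitable arms are played almost every round; this is the thresholding behaviour that distinguishes the profitable bandit problem from the classical single-best-arm setting.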

Original language: English
Pages (from-to): 694-709
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 95
Publication status: Published - 1 Jan 2018
Externally published: Yes
Event: 10th Asian Conference on Machine Learning, ACML 2018 - Beijing, China
Duration: 14 Nov 2018 - 16 Nov 2018

Keywords

  • bayesian policy
  • credit risk
  • index policy
  • multi-armed bandits
  • thresholding bandits
