Improved optimistic algorithms for logistic bandits

Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and allowing richer reward structures to be modeled. It notably covers the logistic model, widely used when rewards are binary. For logistic bandits, the frequentist regret guarantees of existing algorithms are Õ(κ√T), where κ is a problem-dependent constant. Unfortunately, κ can be arbitrarily large as it scales exponentially with the size of the decision set. This may lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by κ. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys an Õ(√T) regret with no dependency in κ, but for a second-order term. Our analysis is based on a new tail inequality for self-normalized martingales, of independent interest.
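For intuition on why κ can blow up, here is a minimal Python sketch (not from the paper itself) assuming the standard definition used in the logistic bandit literature: κ is the supremum of 1/μ'(⟨a, θ⟩) over attainable margins, where μ is the sigmoid link. Since μ'(z) decays like e^{-|z|}, κ grows exponentially with the largest attainable margin, i.e. with the size of the decision set:

```python
import math

def mu(z: float) -> float:
    """Logistic (sigmoid) link function."""
    return 1.0 / (1.0 + math.exp(-z))

def mu_dot(z: float) -> float:
    """Derivative of the logistic link: mu'(z) = mu(z) * (1 - mu(z))."""
    m = mu(z)
    return m * (1.0 - m)

def kappa(max_margin: float) -> float:
    """Problem-dependent constant kappa = sup_{|z| <= max_margin} 1 / mu'(z).

    Here z stands for the margin <a, theta*> over the decision set;
    the sup is attained at the boundary because mu' is smallest there.
    """
    return 1.0 / mu_dot(max_margin)

# kappa grows exponentially as the margin bound (decision-set size) grows:
for s in [1, 3, 5, 10]:
    print(f"margin bound {s:2d}: kappa = {kappa(s):.1f}")
```

With a margin bound of 0 (all rewards near probability 1/2), κ is exactly 4; by a margin bound of 10 it already exceeds e^9, which is the exponential dependence the paper's Õ(√T) bound removes from the leading term.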

Original language: English
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daumé III, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Pages: 3033-3041
Number of pages: 9
ISBN (Electronic): 9781713821120
Publication status: Published - 1 Jan 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 - 18 Jul 2020

Publication series

Name: 37th International Conference on Machine Learning, ICML 2020
Volume: PartF168147-4

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
City: Virtual, Online
Period: 13/07/20 - 18/07/20
