Improved Optimistic Algorithms for Logistic Bandits

Research output: Contribution to journal › Conference article › peer-review

Abstract

The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and allowing to model richer reward structures. It notably covers the logistic model, widely used when rewards are binary. For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa\sqrt{T})$, where $\kappa$ is a problem-dependent constant. Unfortunately, $\kappa$ can be arbitrarily large as it scales exponentially with the size of the decision set. This may lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by $\kappa$. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys a $\tilde{\mathcal{O}}(\sqrt{T})$ regret with no dependency in $\kappa$, but for a second-order term. Our analysis is based on a new tail inequality for self-normalized martingales, of independent interest.
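As a minimal sketch of why $\kappa$ blows up (an illustration under standard logistic-bandit conventions, not the paper's algorithm): in the logistic model the reward probability for an arm $x$ is $\mu(x^\top\theta) = 1/(1+e^{-x^\top\theta})$, and $\kappa$ is conventionally the worst-case inverse slope $\sup 1/\dot\mu(x^\top\theta)$ over the decision set. Since $\dot\mu$ decays exponentially in $|x^\top\theta|$, $\kappa$ grows exponentially with the radius $S$ bounding $|x^\top\theta|$ (the symbol `S` and helper names below are illustrative):

```python
import math

def sigmoid(z):
    """Logistic link: P(reward = 1 | arm x, parameter theta) = sigmoid(x . theta)."""
    return 1.0 / (1.0 + math.exp(-z))

def mu_dot(z):
    """Derivative of the logistic link, i.e. the local slope of the reward curve."""
    s = sigmoid(z)
    return s * (1.0 - s)

def kappa(S):
    """kappa = sup over |x . theta| <= S of 1 / mu_dot(x . theta).

    mu_dot is largest at 0 and decays exponentially, so the sup sits at the
    boundary |x . theta| = S, and kappa therefore grows roughly like e^S.
    """
    return 1.0 / mu_dot(S)
```

For example, `kappa(0)` is exactly 4, while `kappa(10)` already exceeds 20 000, which is the exponential dependence on the decision-set size that the abstract refers to.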

Original language: English
Journal: Proceedings of Machine Learning Research
Volume: 119
Publication status: Published - 1 Jan 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 - 18 Jul 2020
