Bayesian Off-Policy Evaluation and Learning for Large Action Spaces

Research output: Contribution to journal › Conference article › peer-review

Abstract

In interactive systems, actions are often correlated, presenting an opportunity for more sample-efficient off-policy evaluation (OPE) and learning (OPL) in large action spaces. We introduce a unified Bayesian framework to capture these correlations through structured and informative priors. In this framework, we propose sDM, a generic Bayesian approach for OPE and OPL, grounded in both algorithmic and theoretical foundations. Notably, sDM leverages action correlations without compromising computational efficiency. Moreover, inspired by online Bayesian bandits, we introduce Bayesian metrics that assess the average performance of algorithms across multiple problem instances, deviating from the conventional worst-case assessments. We analyze sDM in OPE and OPL, highlighting the benefits of leveraging action correlations. Empirical evidence showcases the strong performance of sDM.
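To make the abstract's central idea concrete, below is a minimal, hypothetical sketch of a Bayesian direct-method estimator with a structured Gaussian prior over correlated action rewards. It is not the paper's sDM algorithm; the action count, correlation kernel, noise level, and policies are all assumptions chosen for illustration. It shows how a prior that ties nearby actions together lets logged data on one action inform value estimates for its neighbors.

```python
import numpy as np

# Hypothetical toy setup: K actions whose mean rewards are correlated.
# This is NOT the paper's sDM; it is a conjugate-Gaussian direct-method
# sketch illustrating how a structured prior over correlated action
# rewards can sharpen off-policy value estimates in large action spaces.

rng = np.random.default_rng(0)
K = 50                                  # number of actions (assumed)

# Structured prior: nearby action indices have correlated mean rewards
# (an assumed RBF-style correlation; real priors are problem-specific).
idx = np.arange(K)
Sigma0 = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 5.0) ** 2)
Sigma0 += 1e-6 * np.eye(K)              # jitter for numerical stability
mu0 = np.zeros(K)
sigma2 = 0.25                           # known reward noise (assumed)

# Logged data: actions a_i drawn by a logging policy, noisy rewards r_i.
true_theta = rng.multivariate_normal(mu0, Sigma0)
n = 200
a = rng.integers(0, K, size=n)
r = true_theta[a] + rng.normal(0.0, np.sqrt(sigma2), size=n)

# Conjugate Gaussian posterior over the mean-reward vector theta:
# per-action counts and reward sums give the Gaussian likelihood terms.
counts = np.bincount(a, minlength=K)
sums = np.bincount(a, weights=r, minlength=K)
prec_post = np.linalg.inv(Sigma0) + np.diag(counts / sigma2)
Sigma_post = np.linalg.inv(prec_post)
mu_post = Sigma_post @ (np.linalg.solve(Sigma0, mu0) + sums / sigma2)

# Direct-method OPE: plug the posterior mean into the target policy.
pi_target = rng.dirichlet(np.ones(K))   # assumed target policy
print(f"estimated value of target policy: {pi_target @ mu_post:.3f}")
print(f"true value of target policy:      {pi_target @ true_theta:.3f}")
```

Because the prior covariance couples actions, even actions rarely logged inherit information from correlated neighbors, which is the sample-efficiency gain the abstract highlights.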

Original language: English
Pages (from-to): 136-144
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 258
Publication status: Published - 1 Jan 2025
Externally published: Yes
Event: 28th International Conference on Artificial Intelligence and Statistics, AISTATS 2025 - Mai Khao, Thailand
Duration: 3 May 2025 - 5 May 2025
