Abstract
Off-policy learning (OPL) often involves minimizing a risk estimator based on importance weighting to correct the bias introduced by the logging policy used to collect the data. However, this method can produce a high-variance estimator. A common solution is to regularize the importance weights (IW) and learn the policy by minimizing an estimator augmented with penalties derived from generalization bounds specific to that estimator. This approach, known as pessimism, has gained recent attention but lacks a unified framework for analysis. To address this gap, we introduce a comprehensive PAC-Bayesian framework to examine pessimism with regularized importance weighting. We derive a tractable PAC-Bayesian generalization bound that applies uniformly to common IW regularizations, enabling their comparison within a single framework. Our empirical results challenge conventional wisdom, demonstrating the effectiveness of standard IW regularization techniques.
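For concreteness, below is a minimal Python sketch of the estimator the abstract refers to: the empirical importance-weighted risk, with weight clipping as one common IW regularization. The function name, the clipping scheme, and the synthetic data are illustrative assumptions for exposition, not the paper's implementation or its full set of regularizations.

```python
import numpy as np

def iw_risk(costs, pi_target, pi_logging, clip_m=None):
    """Importance-weighted empirical risk of a target policy (a sketch).

    costs      -- observed costs c_i of the logged actions (lower is better)
    pi_target  -- pi(a_i | x_i): target-policy probabilities of logged actions
    pi_logging -- pi_0(a_i | x_i): logging-policy propensities
    clip_m     -- if given, cap the importance weights at this threshold
    """
    w = pi_target / pi_logging       # importance weights correct the logging bias
    if clip_m is not None:
        w = np.minimum(w, clip_m)    # regularize: accept a little bias
                                     # in exchange for lower variance
    return float(np.mean(w * costs))

# Illustrative synthetic data (assumed, not from the paper).
rng = np.random.default_rng(0)
n = 10_000
costs = rng.uniform(size=n)
pi_logging = rng.uniform(0.05, 1.0, size=n)
pi_target = rng.uniform(0.0, 1.0, size=n)

print(iw_risk(costs, pi_target, pi_logging))             # unregularized, high variance
print(iw_risk(costs, pi_target, pi_logging, clip_m=5.0)) # clipped, lower variance
```

In the pessimistic approach the abstract describes, one would minimize such a regularized estimate plus a penalty term derived from a generalization bound for that estimator, rather than the raw estimate alone.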
| Original language | English |
|---|---|
| Pages (from-to) | 88-109 |
| Number of pages | 22 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 244 |
| Publication status | Published - 1 Jan 2024 |
| Event | 40th Conference on Uncertainty in Artificial Intelligence (UAI 2024), Barcelona, Spain, 15-19 Jul 2024 |