Towards Minimax Optimality of Model-based Robust Reinforcement Learning

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the sample complexity of obtaining an ϵ-optimal policy in robust discounted Markov Decision Processes (RMDPs), given only access to a generative model of the nominal kernel. This problem is widely studied in the non-robust case, where it is known that any planning approach applied to an empirical MDP estimated with Õ(H³|S||A|/ϵ²) samples provides an ϵ-optimal policy, which is minimax optimal. Results in the robust case are much more scarce. For sa- (resp. s-) rectangular uncertainty sets, until recently the best-known sample complexity was Õ(H⁴|S|²|A|/ϵ²) (resp. Õ(H⁴|S|²|A|²/ϵ²)), for specific algorithms and when the uncertainty set is based on the total variation (TV), the KL or the Chi-square divergence. In this paper, we consider uncertainty sets defined with an Lp-ball (recovering the TV case), and study the sample complexity of any planning algorithm (with a high-accuracy guarantee on the solution) applied to an empirical RMDP estimated using the generative model. In the general case, we prove a sample complexity of Õ(H⁴|S||A|/ϵ²) for both the sa- and s-rectangular cases (improvements of |S| and |S||A| respectively). When the size of the uncertainty is small enough, we improve the sample complexity to Õ(H³|S||A|/ϵ²), matching for the first time the lower bound of the non-robust case as well as a robust lower bound. Finally, we also introduce simple and efficient algorithms for solving the studied Lp robust MDPs.
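The model-based recipe summarized above (draw samples from the generative model of the nominal kernel, build an empirical RMDP, then plan on it) can be sketched concretely. The Python below is a minimal illustration, not the paper's algorithm: it uses the p = 1 (total variation) instance of the Lp-ball, an sa-rectangular uncertainty set of TV radius β around each estimated row of the empirical kernel, and a discount factor γ in place of the horizon H ≈ 1/(1−γ). The function names (empirical_kernel, worst_case_expectation, robust_value_iteration) and the sampler interface are illustrative assumptions.

```python
import numpy as np

def empirical_kernel(sample_next_state, n_states, n_actions, n_samples):
    """Estimate the nominal kernel by drawing n_samples next states per (s, a)
    from the generative model (assumed interface: sample_next_state(s, a) -> s')."""
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                P_hat[s, a, sample_next_state(s, a)] += 1.0 / n_samples
    return P_hat

def worst_case_expectation(p_hat, V, beta):
    """min_q q.V over distributions q with TV(q, p_hat) <= beta (sa-rectangular,
    p = 1 case). Greedy LP solution: move up to beta of probability mass from the
    highest-value states onto the lowest-value state."""
    q = p_hat.copy()
    lo = np.argmin(V)
    budget = beta
    for s in np.argsort(-V):          # states by decreasing value
        if budget <= 0:
            break
        if s == lo:
            continue
        moved = min(q[s], budget)
        q[s] -= moved
        q[lo] += moved
        budget -= moved
    return q @ V

def robust_value_iteration(P_hat, R, gamma, beta, n_iter=500):
    """Robust value iteration on the empirical RMDP; any planner with an accuracy
    guarantee would do, plain fixed-point iteration keeps the sketch short."""
    n_states, n_actions, _ = P_hat.shape
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(P_hat[s, a], V, beta)
                       for a in range(n_actions)] for s in range(n_states)])
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)        # robust value and greedy deterministic policy
```

For other Lp-balls the inner minimization remains a small convex problem per (s, a) pair, which keeps the planning step cheap; the sample-complexity statements above concern how many samples per (s, a) are needed for the policy computed on the empirical kernel to be ϵ-optimal in the true robust MDP.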

Original language: English
Pages (from-to): 820-855
Number of pages: 36
Journal: Proceedings of Machine Learning Research
Volume: 244
Publication status: Published - 1 Jan 2024
Event: 40th Conference on Uncertainty in Artificial Intelligence, UAI 2024 - Barcelona, Spain
Duration: 15 Jul 2024 - 19 Jul 2024
