Abstract
Markovian systems are widely used in reinforcement learning (RL) when the successful completion of a task depends exclusively on the last interaction between an autonomous agent and its environment. Unfortunately, real-world instructions are typically complex and often better described as non-Markovian. In this paper we present an extension method that solves partially-observable non-Markovian reward decision processes (PONMRDPs) by solving equivalent Markovian models. This potentially enables state-of-the-art Markovian techniques, including RL, to find optimal behaviours for problems best described as PONMRDPs. We provide formal optimality guarantees for our extension method, together with a counterexample illustrating that naive extensions of existing techniques for fully-observable environments cannot provide such guarantees.
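The core idea of making a non-Markovian reward Markovian via state extension can be illustrated with a toy sketch (our own illustration, not the paper's construction; the states, modes, and function names below are hypothetical). A task such as "reward is earned only after visiting A and then B" is non-Markovian over raw states, but becomes Markovian once each state is extended with an automaton mode tracking task progress:

```python
def automaton_step(mode, state):
    # Hypothetical progress automaton for the task "visit A, then B".
    # mode 0: waiting for A; mode 1: A seen, waiting for B; mode 2: done.
    if mode == 0 and state == "A":
        return 1
    if mode == 1 and state == "B":
        return 2
    return mode

def extended_reward(mode, state):
    # Reward depends only on the current (mode, state) pair, so it is
    # Markovian over the extended state space.
    next_mode = automaton_step(mode, state)
    return 1.0 if (mode == 1 and next_mode == 2) else 0.0

def run(trajectory):
    # Accumulate reward along a trajectory of raw states.
    mode, total = 0, 0.0
    for state in trajectory:
        total += extended_reward(mode, state)
        mode = automaton_step(mode, state)
    return total

print(run(["C", "A", "C", "B"]))  # A then B: task completed -> 1.0
print(run(["B", "C", "A"]))       # B before A: task not completed -> 0.0
```

The same history-dependent reward that was impossible to express as a function of the raw state alone is a plain function of the extended state, so any Markovian solver can be applied to the extended model.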
| Original language | English |
|---|---|
| Pages (from-to) | 450-457 |
| Number of pages | 8 |
| Journal | International Conference on Agents and Artificial Intelligence |
| Volume | 2 |
| DOIs | |
| Publication status | Published - 1 Jan 2022 |
| Event | 14th International Conference on Agents and Artificial Intelligence, ICAART 2022 - Virtual, Online. Duration: 3 Feb 2022 → 5 Feb 2022 |
Keywords
- Extended Partially Observable Decision Process
- Markov Decision Processes
- Partial Observability
- non-Markovian Rewards