Compositional Shield Synthesis for Safe Reinforcement Learning in Partial Observability

Research output: Contribution to journal › Article › peer-review

Abstract

Agents controlled by the output of reinforcement learning (RL) algorithms often transition to unsafe states, particularly in uncertain and partially observable environments. Partially observable Markov decision processes (POMDPs) provide a natural setting for studying such scenarios with limited sensing. Shields filter undesirable actions to ensure safe RL by preserving safety requirements in the agent's policy. However, synthesizing holistic shields is computationally expensive in complex deployment scenarios. We propose the compositional synthesis of shields by modeling safety requirements by parts, thereby improving scalability. In particular, experiments on POMDP problem formulations using RL algorithms illustrate that an RL agent equipped with the resulting compositional shield, beyond being safe, converges to a higher expected reward. By using subproblem formulations, we preserve and improve the ability of shielded agents to require fewer training episodes than unshielded agents, especially in sparse-reward settings. Concretely, we find that compositional shield synthesis allows an RL agent to remain safe in environments two orders of magnitude larger than other state-of-the-art model-based approaches.
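The core idea of the abstract can be illustrated with a minimal sketch: a shield restricts the agent's action choice to those a safety requirement allows, and a compositional shield combines several per-requirement shields by intersecting their allowed action sets. The interface below (`Shield`, `compose`, the toy requirements, and the state names) is a hypothetical illustration, not the paper's implementation.

```python
from typing import Callable, Hashable, Set

Action = Hashable
State = Hashable

class Shield:
    """Maps a state (or observation/belief) to the set of actions it allows."""
    def __init__(self, allowed: Callable[[State], Set[Action]]):
        self.allowed = allowed

    def filter(self, state: State, actions: Set[Action]) -> Set[Action]:
        # Keep only the actions that this shield permits in the given state.
        return actions & self.allowed(state)

def compose(shields: list) -> Shield:
    """Compositional shield: an action survives only if every
    per-requirement shield allows it (intersection of allowed sets)."""
    def allowed(state: State) -> Set[Action]:
        sets = [s.allowed(state) for s in shields]
        out = sets[0]
        for s in sets[1:]:
            out = out & s
        return out
    return Shield(allowed)

# Two toy safety requirements over the action set {"left", "right", "stay"}:
no_left = Shield(lambda s: {"right", "stay"})
no_right_at_edge = Shield(
    lambda s: {"left", "stay"} if s == "edge" else {"left", "right", "stay"}
)
shield = compose([no_left, no_right_at_edge])
print(shield.filter("edge", {"left", "right", "stay"}))  # → {'stay'}
```

During RL training, the agent would sample actions only from the filtered set, so unsafe actions are never executed; each subrequirement's shield can be synthesized independently, which is what makes the approach scale.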

Original language: English
Pages (from-to): 373-384
Number of pages: 12
Journal: IEEE Open Journal of Control Systems
Volume: 4
DOIs
Publication status: Published - 1 Jan 2025

Keywords

  • shielding
  • compositionality
  • reinforcement learning
  • safety
  • uncertainty

