Paper Title

Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments

Paper Authors

Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu

Paper Abstract

It is quite challenging to ensure the safety of reinforcement learning (RL) agents in an unknown and stochastic environment under hard constraints that require the system state not to reach certain specified unsafe regions. Many popular safe RL methods such as those based on the Constrained Markov Decision Process (CMDP) paradigm formulate safety violations in a cost function and try to constrain the expectation of cumulative cost under a threshold. However, it is often difficult to effectively capture and enforce hard reachability-based safety constraints indirectly with such constraints on safety violation costs. In this work, we leverage the notion of barrier function to explicitly encode the hard safety constraints, and given that the environment is unknown, relax them to our design of generative-model-based soft barrier functions. Based on such soft barriers, we propose a safe RL approach that can jointly learn the environment and optimize the control policy, while effectively avoiding unsafe regions with safety probability optimization. Experiments on a set of examples demonstrate that our approach can effectively enforce hard safety constraints and significantly outperform CMDP-based baseline methods in system safe rate measured via simulations.
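The abstract does not give implementation details, so the following is only a minimal illustrative sketch, not the authors' code, of the core idea of relaxing hard barrier-certificate conditions into differentiable "soft" penalties. It assumes a PyTorch setup; all names (BarrierNet, soft_barrier_loss, margin) and the random placeholder data are hypothetical, and in the paper's setting the successor states would instead be sampled from a learned generative model of the unknown stochastic dynamics under the current policy.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# a neural barrier function trained with hinge relaxations of the three
# standard hard barrier conditions.
import torch
import torch.nn as nn

class BarrierNet(nn.Module):
    """Neural barrier function B(s): a state is certified safe when B(s) <= 0."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)

def soft_barrier_loss(barrier, s_init, s_unsafe, s, s_next, margin=0.1):
    """Soft relaxation of the hard barrier conditions:
    (1) B <= -margin on initial states,
    (2) B >= +margin on unsafe states,
    (3) B non-increasing along transitions s -> s_next.
    Each ReLU term is zero exactly when its hard condition holds,
    so minimizing the sum pushes toward a valid certificate."""
    init_term = torch.relu(barrier(s_init) + margin).mean()
    unsafe_term = torch.relu(margin - barrier(s_unsafe)).mean()
    decrease_term = torch.relu(barrier(s_next) - barrier(s)).mean()
    return init_term + unsafe_term + decrease_term

if __name__ == "__main__":
    torch.manual_seed(0)
    barrier = BarrierNet(state_dim=4)
    opt = torch.optim.Adam(barrier.parameters(), lr=1e-3)
    # Random tensors stand in for initial-state, unsafe-state, and
    # transition samples; real samples would come from model rollouts.
    s_init, s_unsafe = torch.randn(32, 4), 3.0 + torch.randn(32, 4)
    s = torch.randn(32, 4)
    s_next = s + 0.05 * torch.randn(32, 4)
    for _ in range(100):
        loss = soft_barrier_loss(barrier, s_init, s_unsafe, s, s_next)
        opt.zero_grad(); loss.backward(); opt.step()
```

Because the penalty terms are differentiable, such a loss can in principle be optimized jointly with a policy objective, which is the kind of joint learning the abstract describes.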
