Paper Title
Off-Policy Risk Assessment in Markov Decision Processes
Paper Authors
Paper Abstract
Addressing such diverse ends as safety, alignment with human preferences, and the efficiency of learning, a growing line of reinforcement learning research focuses on risk functionals that depend on the entire distribution of returns. Recent work on \emph{off-policy risk assessment} (OPRA) for contextual bandits introduced consistent estimators for the target policy's CDF of returns, along with finite-sample guarantees that extend to (and hold simultaneously over) all risk functionals. In this paper, we lift OPRA to Markov decision processes (MDPs), where importance sampling (IS) CDF estimators suffer high variance on longer trajectories due to small effective sample size. To mitigate these problems, we incorporate model-based estimation to develop the first doubly robust (DR) estimator for the CDF of returns in MDPs. This estimator enjoys significantly lower variance and, when the model is well specified, achieves the Cramér-Rao variance lower bound. Moreover, for many risk functionals, the downstream estimates enjoy both lower bias and lower variance. Additionally, we derive the first minimax lower bounds for off-policy CDF and risk estimation, which match our error bounds up to a constant factor. Finally, we demonstrate the precision of our DR CDF estimates experimentally on several different environments.
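To make the quantities in the abstract concrete, the sketch below shows a plain importance-sampling (IS) estimate of a target policy's return CDF from logged behavior-policy trajectories, followed by a lower-tail CVaR read off that CDF. This is a minimal illustration, not the paper's DR estimator: the function names, the trajectory data layout, the self-normalization step, and the CVaR discretization are all assumptions made for this sketch.

```python
import numpy as np

def is_cdf_estimate(returns, behavior_logprobs, target_logprobs, grid):
    """Self-normalized IS estimate of the target policy's return CDF.

    returns:           shape (n,), return G_i of each logged trajectory
    behavior_logprobs: shape (n, T), log pi_b(a_t | s_t) along each trajectory
    target_logprobs:   shape (n, T), log pi_e(a_t | s_t) along each trajectory
    grid:              shape (m,), sorted points x at which to evaluate F(x)

    Returns an estimate of F(x) = P_{pi_e}(G <= x) on the grid.  The weight of
    each trajectory is a product of per-step ratios, which is exactly what
    shrinks the effective sample size on long horizons.
    """
    # Trajectory-level log importance weights: sum_t [log pi_e - log pi_b].
    log_w = (target_logprobs - behavior_logprobs).sum(axis=1)
    log_w -= log_w.max()          # stabilize exp(); cancels after normalization
    w = np.exp(log_w)
    w = w / w.sum()               # self-normalize: keeps F_hat in [0, 1], adds some bias

    # Weighted empirical CDF: F_hat(x) = sum_i w_i * 1{G_i <= x}.
    indicators = (returns[:, None] <= grid[None, :]).astype(float)
    return w @ indicators

def cvar_from_cdf(grid, cdf, alpha=0.05):
    """Lower-tail CVaR_alpha approximated as the average of F^{-1}(u) over u in (0, alpha]."""
    levels = np.linspace(alpha / 50, alpha, 50)
    var_values = []
    for u in levels:
        # Inverse CDF: first grid point whose estimated CDF reaches level u.
        idx = min(np.searchsorted(cdf, u), len(grid) - 1)
        var_values.append(grid[idx])
    return float(np.mean(var_values))
```

The product of per-step ratios inside `log_w` is the source of the variance blow-up on long trajectories noted in the abstract; the paper's doubly robust estimator addresses this by incorporating model-based estimates of the return distribution, which is not reproduced in this sketch.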