Paper Title
Value-driven Hindsight Modelling
Paper Authors
Paper Abstract
Value estimation is a critical component of the reinforcement learning (RL) paradigm. The question of how to effectively learn value predictors from data is one of the major problems studied by the RL community, and different approaches exploit structure in the problem domain in different ways. Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function. In contrast, model-free methods directly leverage the quantity of interest from the future, but receive a potentially weak scalar signal (an estimate of the return). We develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction. To this end, we determine which features of the future trajectory provide useful information to predict the associated return. This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function. The idea can be understood as reasoning, in hindsight, about which aspects of the future observations could help past value prediction. We show how this can help dramatically even in simple policy evaluation settings. We then test our approach at scale in challenging domains, including on 57 Atari 2600 games.
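
To make the idea in the abstract concrete, below is a minimal sketch of how hindsight-style value learning could be wired up on a toy policy-evaluation problem, using plain NumPy and linear function approximators. All names (A, B, w_plus, w_m, phi_dim), the synthetic data, and the exact combination of losses are illustrative assumptions, meant only to show how the pieces could fit together; they are not the paper's actual architecture, losses, or results.

# Minimal sketch (assumed setup, not the paper's implementation): learn hindsight
# features phi of the future trajectory that help predict the return, then learn
# a model that predicts phi from the current observation, and build the usable
# value estimate on those predicted features.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, fut_dim, phi_dim, n_data, lr = 8, 16, 2, 5000, 0.05

# Toy data: observation x_t, a summary u_t of the future trajectory that is only
# partly predictable from x_t, and a return G_t driven by one direction of u_t.
X = rng.normal(size=(n_data, obs_dim))
M = rng.normal(size=(obs_dim, fut_dim)) / np.sqrt(obs_dim)
U = X @ M + 0.5 * rng.normal(size=(n_data, fut_dim))
G = U @ rng.normal(size=fut_dim) + 0.1 * rng.normal(size=n_data)

# Parameters: hindsight features phi = A u (available only after the fact), a
# hindsight value head w_plus over (x, phi), a feature model phi_hat = B x that
# is usable at decision time, and a model-based value head w_m over (x, phi_hat).
A = rng.normal(scale=0.1, size=(phi_dim, fut_dim))
B = rng.normal(scale=0.1, size=(phi_dim, obs_dim))
w_plus = np.zeros(obs_dim + phi_dim)
w_m = np.zeros(obs_dim + phi_dim)

for step in range(3001):
    idx = rng.integers(0, n_data, size=64)
    x, u, g = X[idx], U[idx], G[idx]
    batch = len(idx)

    # 1) Hindsight value loss: predict the return from (x, phi(u)). Its gradient
    #    is what shapes phi into the return-relevant aspects of the future.
    phi = u @ A.T
    feats_plus = np.concatenate([x, phi], axis=1)
    err_plus = feats_plus @ w_plus - g
    grad_w_plus = feats_plus.T @ err_plus / batch
    w_phi = w_plus[obs_dim:]                        # value weights on phi
    grad_A = np.outer(w_phi, err_plus @ u) / batch

    # 2) Feature-model loss: predict phi from the current observation alone,
    #    treating phi as a fixed regression target (no gradient into A).
    phi_hat = x @ B.T
    grad_B = (phi_hat - phi).T @ x / batch

    # 3) Model-based value loss: the value estimate actually usable at decision
    #    time, built on the predicted features (no gradient into B here).
    feats_m = np.concatenate([x, phi_hat], axis=1)
    err_m = feats_m @ w_m - g
    grad_w_m = feats_m.T @ err_m / batch

    w_plus -= lr * grad_w_plus
    A -= lr * grad_A
    B -= lr * grad_B
    w_m -= lr * grad_w_m

    if step % 500 == 0:
        print(f"step {step:4d}  hindsight value MSE {np.mean(err_plus**2):7.3f}"
              f"  model value MSE {np.mean(err_m**2):7.3f}")

The intended reading of the sketch: the hindsight value head is allowed to see features phi computed from the observed future, and its return-prediction error is what shapes those features; a separate model then learns to predict phi from the current observation alone, so the final value estimate remains computable at decision time, when the future is not yet available. In this all-linear toy the predicted features add no extra capacity beyond x itself; the example only illustrates the flow of credit between the three losses.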