Paper Title
User-Interactive Offline Reinforcement Learning
Paper Authors
Paper Abstract
Offline reinforcement learning algorithms still lack trust in practice due to the risk that the learned policy performs worse than the original policy that generated the dataset, or behaves in unexpected ways that are unfamiliar to the user. At the same time, offline RL algorithms cannot tune their most important hyperparameter: the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the aforementioned issues simultaneously. This lets users start with the original behavior, grant successively greater deviation, and stop at any time when the policy deteriorates or its behavior strays too far from the familiar one.
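
To make the idea of a runtime-tunable proximity hyperparameter concrete, below is a minimal illustrative sketch, not the paper's actual training procedure. It assumes hypothetical functions behavior_policy and learned_policy and a wrapper act, and simply interpolates between the original and the learned action according to a user-chosen deviation value, so that 0 reproduces the familiar behavior and larger values grant more deviation.

```python
import numpy as np

# Hypothetical stand-ins: neither function comes from the paper; they only
# illustrate the idea of a user-tunable deviation from the original policy.
def behavior_policy(state: np.ndarray) -> np.ndarray:
    """Action the original (dataset-generating) policy would take."""
    return np.zeros(2)

def learned_policy(state: np.ndarray) -> np.ndarray:
    """Action proposed by the offline-RL policy."""
    return np.ones(2)

def act(state: np.ndarray, deviation: float) -> np.ndarray:
    """Blend both actions according to a user-chosen deviation in [0, 1].

    deviation = 0.0 reproduces the familiar original behavior;
    deviation = 1.0 follows the offline-RL policy without constraint.
    The user can raise the value step by step at runtime and stop
    (or lower it again) as soon as performance or familiarity degrades.
    """
    deviation = float(np.clip(deviation, 0.0, 1.0))
    return (1.0 - deviation) * behavior_policy(state) + deviation * learned_policy(state)

# Example: gradually granting the policy more freedom to deviate.
state = np.zeros(4)
for deviation in (0.0, 0.25, 0.5, 1.0):
    print(deviation, act(state, deviation))
```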