Paper Title

Analysis of Q-learning with Adaptation and Momentum Restart for Gradient Descent

Paper Authors

Bowen Weng, Huaqing Xiong, Yingbin Liang, Wei Zhang

Paper Abstract

Existing convergence analyses of Q-learning mostly focus on the vanilla stochastic gradient descent (SGD) type of updates. Although Adaptive Moment Estimation (Adam) has been commonly used for practical Q-learning algorithms, there has not been any convergence guarantee provided for Q-learning with this type of update. In this paper, we first characterize the convergence rate for Q-AMSGrad, which is the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative to Adam for theoretical analysis). To further improve the performance, we propose to incorporate a momentum restart scheme into Q-AMSGrad, resulting in the so-called Q-AMSGradR algorithm. The convergence rate of Q-AMSGradR is also established. Our experiments on a linear quadratic regulator problem show that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates. The two algorithms also exhibit significantly better performance than the DQN learning method over a batch of Atari 2600 games.
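To make the abstract's terminology concrete, below is a minimal sketch of an AMSGrad parameter update with an optional momentum restart, in the spirit of what the abstract calls Q-AMSGrad and Q-AMSGradR. It is an illustrative approximation only, not the authors' implementation: the class name, hyperparameter defaults, and the choice to reset only the first-moment term on restart are assumptions made for this sketch.

```python
# Sketch of an AMSGrad update step with an optional periodic momentum restart.
# Not the authors' code; gradients would come from a semi-gradient of the
# Q-learning Bellman error in an actual Q-AMSGrad / Q-AMSGradR implementation.
import numpy as np

class AMSGradUpdater:
    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.99,
                 eps=1e-8, restart_period=None):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.restart_period = restart_period  # None disables restarts (plain AMSGrad)
        self.m = np.zeros(dim)       # first-moment (momentum) estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v: the AMSGrad correction to Adam
        self.t = 0

    def step(self, theta, grad):
        self.t += 1
        # Momentum restart: periodically reset the momentum estimate so that
        # stale directions do not dominate the update (the "R" in Q-AMSGradR).
        # Whether to also reset the second-moment terms is a design choice;
        # this sketch resets only the first moment.
        if self.restart_period is not None and self.t % self.restart_period == 0:
            self.m[:] = 0.0
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # keep the element-wise max
        return theta - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```

As a usage example under the same assumptions: with a linear Q-function Q(s, a) = θᵀφ(s, a), `grad` would be the semi-gradient −δ·φ(s, a) of the squared Bellman error, where δ is the TD error; leaving `restart_period=None` recovers the plain AMSGrad-style update.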
