Paper Title

On Computation and Generalization of Generative Adversarial Imitation Learning

Authors

Minshuo Chen, Yizhou Wang, Tianyi Liu, Zhuoran Yang, Xingguo Li, Zhaoran Wang, Tuo Zhao

Abstract

Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies. Unlike Reinforcement Learning (RL), GAIL takes advantage of demonstration data from experts (e.g., humans) and learns both the policy and the reward function of the unknown environment. Despite significant empirical progress, the theory behind GAIL remains largely unknown. The major difficulty comes from the underlying temporal dependency of the demonstration data and the minimax computational formulation of GAIL, which lacks convex-concave structure. To bridge this gap between theory and practice, this paper investigates the theoretical properties of GAIL. Specifically, we show: (1) for GAIL with general reward parameterization, generalization can be guaranteed as long as the class of reward functions is properly controlled; (2) when the reward is parameterized as a reproducing kernel function, GAIL can be efficiently solved by stochastic first-order optimization algorithms, which attain sublinear convergence to a stationary solution. To the best of our knowledge, these are the first results on statistical and computational guarantees for imitation learning with reward/policy function approximation. Numerical experiments are provided to support our analysis.
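For reference, the abstract refers to the minimax formulation of GAIL; a common way to write it (the paper's exact notation and regularizer may differ) is

$$\min_{\pi \in \Pi} \; \max_{r \in \mathcal{R}} \; \mathbb{E}_{(s,a) \sim d_{\pi_E}}\big[r(s,a)\big] \;-\; \mathbb{E}_{(s,a) \sim d_{\pi}}\big[r(s,a)\big] \;-\; \psi(r),$$

where $\pi_E$ is the expert policy, $d_\pi$ is the state-action occupancy measure induced by policy $\pi$, $\mathcal{R}$ is the reward function class (e.g., a ball in a reproducing kernel Hilbert space when the reward is parameterized as a kernel function), and $\psi$ is an optional regularizer on the reward. The expectation under $d_{\pi_E}$ must be estimated from temporally dependent expert trajectories, which is the source of the generalization difficulty, while the inner maximization over $\mathcal{R}$ coupled with the policy update is what the stochastic first-order algorithms alternate between.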
