Paper Title

ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward

Paper Authors

Zixian Ma, Rose Wang, Fei-Fei Li, Michael Bernstein, Ranjay Krishna

Paper Abstract

Modern multi-agent reinforcement learning frameworks rely on centralized training and reward shaping to perform well. However, centralized training and dense rewards are not readily available in the real world. Current multi-agent algorithms struggle to learn in the alternative setup of decentralized training or sparse rewards. To address these issues, we propose a self-supervised intrinsic reward ELIGN - expectation alignment - inspired by the self-organization principle in Zoology. Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations. This allows the agents to learn collaborative behaviors without any external reward or centralized training. We demonstrate the efficacy of our approach across 6 tasks in the multi-agent particle and the complex Google Research football environments, comparing ELIGN to sparse and curiosity-based intrinsic rewards. When the number of agents increases, ELIGN scales well in all multi-agent tasks except for one where agents have different capabilities. We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries. These results identify tasks where expectation alignment is a more useful strategy than curiosity-driven exploration for multi-agent coordination, enabling agents to do zero-shot coordination.
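
To make the core idea concrete, below is a minimal, hypothetical sketch of an expectation-alignment intrinsic reward, not the paper's exact formulation. It assumes each agent maintains a learned forward dynamics model that nearby agents can query, and it rewards an agent whose actual next observation matches what its neighbors' models predicted. All names (`elign_intrinsic_reward`, `neighbor_models`) are illustrative assumptions.

```python
import numpy as np

def elign_intrinsic_reward(obs, action, obs_next, neighbor_models):
    """Hypothetical sketch of an expectation-alignment intrinsic reward.

    obs, action     : the agent's current observation and chosen action
    obs_next        : the agent's actual next observation
    neighbor_models : one callable per nearby agent, predict(obs, action) ->
                      predicted next observation (an assumed interface,
                      not the paper's API)
    """
    if not neighbor_models:
        return 0.0  # no neighbors in the vicinity, so no alignment signal
    # Each neighbor's dynamics model encodes what it "expects" this agent to
    # observe next; the intrinsic reward is the negative mean prediction
    # error, so the agent is rewarded for behaving in ways its neighbors
    # can anticipate.
    errors = [np.linalg.norm(obs_next - predict(obs, action))
              for predict in neighbor_models]
    return -float(np.mean(errors))
```

Because this signal depends only on predictions exchanged with nearby agents, it can in principle be computed without centralized training or any external task reward, which is the decentralized, sparse-reward setting the abstract targets.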
