Paper Title
Reinforcement Learning with Neural Radiance Fields
Paper Authors
Abstract
It is a long-standing problem to find effective representations for training reinforcement learning (RL) agents. This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information. Specifically, we propose to train an encoder that maps multiple image observations to a latent space describing the objects in the scene. The decoder built from a latent-conditioned NeRF serves as the supervision signal to learn the latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this NeRF-RL. Our experiments indicate that NeRF as supervision leads to a latent space better suited for the downstream RL tasks involving robotic object manipulations like hanging mugs on hooks, pushing objects, or opening doors. Video: https://dannydriess.github.io/nerf-rl
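The pipeline in the abstract — an encoder mapping multiple image observations to a latent, with a latent-conditioned NeRF decoder providing the reconstruction signal — can be sketched in miniature. This is a toy illustration only, not the paper's implementation: the linear "encoder", the field parameterization, the dimensions, and all variable names (`encode`, `radiance_field`, `render_ray`, `LATENT_DIM`) are assumptions, and learned networks are stood in for by fixed random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for the toy example (not from the paper).
LATENT_DIM = 8
IMG_PIXELS = 16  # each toy "view" is a flattened 4x4 image

# Encoder stand-in: a fixed linear map plays the role of a learned
# CNN that maps image observations to scene latents.
W_enc = rng.normal(size=(LATENT_DIM, IMG_PIXELS))

def encode(images):
    """Map multiple image observations to one latent z by averaging
    per-view features (a simple multi-view aggregation)."""
    feats = [W_enc @ img.ravel() for img in images]
    return np.mean(feats, axis=0)

# Latent-conditioned radiance field stand-in:
# (3D position x, latent z) -> (RGB color, volume density sigma).
W_field = rng.normal(size=(4, 3 + LATENT_DIM))

def radiance_field(x, z):
    out = W_field @ np.concatenate([x, z])
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # sigmoid: colors in [0, 1]
    sigma = np.log1p(np.exp(out[3]))      # softplus: non-negative density
    return rgb, sigma

def render_ray(origin, direction, z, n_samples=32, near=0.0, far=2.0):
    """Standard NeRF-style volumetric rendering quadrature along one ray:
    accumulate color weighted by transmittance and per-sample alpha."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, z)
        alpha = 1.0 - np.exp(-sigma * delta)
        color += transmittance * alpha * rgb
        transmittance *= np.exp(-sigma * delta)
    return color

# Toy usage: encode two views, render one pixel from the latent, and
# compute the photometric reconstruction loss that would supervise the
# encoder. The latent z is what the RL algorithm would consume as state.
views = [rng.random((4, 4)), rng.random((4, 4))]
z = encode(views)
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), z)
target = np.array([0.5, 0.5, 0.5])  # placeholder ground-truth pixel
loss = np.mean((pixel - target) ** 2)
```

In the paper's actual setup the encoder and decoder are trained jointly on this kind of reconstruction objective, and the frozen latent space then serves as the state representation for the downstream RL algorithm.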