Paper Title
SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving
Paper Authors
Paper Abstract
Autonomous driving system development is critically dependent on the ability to replay complex and diverse traffic scenarios in simulation. In such scenarios, the ability to accurately simulate the vehicle sensors such as cameras, lidar or radar is essential. However, current sensor simulators leverage gaming engines such as Unreal or Unity, requiring manual creation of environments, objects and material properties. Such approaches have limited scalability and fail to produce realistic approximations of camera, lidar, and radar data without significant additional work. In this paper, we present a simple yet effective approach to generate realistic scenario sensor data, based only on a limited amount of lidar and camera data collected by an autonomous vehicle. Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes, preserving rich information about object 3D geometry and appearance, as well as the scene conditions. We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle and moving objects in the scene. We demonstrate our approach on the Waymo Open Dataset and show that it can synthesize realistic camera data for simulated scenarios. We also create a novel dataset that contains cases in which two self-driving vehicles observe the same scene at the same time. We use this dataset to provide additional evaluation and demonstrate the usefulness of our SurfelGAN model.
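To make the two-stage pipeline in the abstract concrete, below is a minimal illustrative sketch of the first stage: projecting textured surfels into a novel camera view to form a coarse rendering. This is not the authors' code; the `Surfel` structure, the `splat_surfels` function, and the brute-force per-pixel splatting loop are hypothetical simplifications introduced here for illustration. In the full method, a coarse surfel image like the one produced below would be fed to the SurfelGAN generator, an image-to-image translation network, which turns it into a photorealistic frame.

```python
# Illustrative sketch only: a toy surfel representation and a brute-force
# splatting renderer. All names here are hypothetical, not the paper's API.
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    """A textured surface element: position, orientation, extent, and color."""
    center: np.ndarray   # (3,) world-space position
    normal: np.ndarray   # (3,) unit surface normal
    radius: float        # disc radius in meters
    color: np.ndarray    # (3,) mean RGB recovered from the camera data

def splat_surfels(surfels, K, cam_pose, hw=(480, 640)):
    """Project surfels into a novel camera view with z-buffering.

    K        : (3, 3) camera intrinsics
    cam_pose : (4, 4) world-to-camera extrinsics
    Returns a coarse RGB rendering; in the paper, an image of this kind is
    the input that the GAN translates into a realistic camera image.
    """
    h, w = hw
    image = np.zeros((h, w, 3), dtype=np.float32)
    zbuf = np.full((h, w), np.inf)
    for s in surfels:
        p_cam = cam_pose[:3, :3] @ s.center + cam_pose[:3, 3]
        if p_cam[2] <= 0:          # surfel is behind the camera
            continue
        uvw = K @ p_cam
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        # Approximate the projected disc footprint in pixels.
        r_px = max(1, int(s.radius * K[0, 0] / p_cam[2]))
        for du in range(-r_px, r_px + 1):
            for dv in range(-r_px, r_px + 1):
                x, y = u + du, v + dv
                if 0 <= x < w and 0 <= y < h and p_cam[2] < zbuf[y, x]:
                    zbuf[y, x] = p_cam[2]   # keep the closest surfel
                    image[y, x] = s.color
    return image

# Example: render a single gray road-surface surfel from a novel viewpoint.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pose = np.eye(4)  # camera at the world origin, looking down +z
surfels = [Surfel(np.array([0.0, 0.5, 5.0]), np.array([0.0, -1.0, 0.0]),
                  0.2, np.array([0.5, 0.5, 0.5]))]
coarse = splat_surfels(surfels, K, pose)
```

Because the surfel map is built once from the recorded lidar and camera passes, moving the virtual camera only changes the projection step above; this is what lets the approach synthesize sensor data for vehicle poses that were never actually driven.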