Paper Title

Physically Realizable Adversarial Examples for LiDAR Object Detection

Authors

James Tu, Mengye Ren, Siva Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, Raquel Urtasun

Abstract

Modern autonomous driving systems rely heavily on deep learning models to process point cloud sensory data; meanwhile, deep models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a security concern for the self-driving industry, there has been very little exploration in 3D perception, as most adversarial attacks have only been applied to 2D flat images. In this paper, we address this issue and present a method to generate universal 3D adversarial objects to fool LiDAR detectors. In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors, with a success rate of 80%. We report attack results on a suite of detectors using various input representations of point clouds. We also conduct a pilot study on adversarial defense using data augmentation. This is one step towards safer self-driving under unseen conditions with limited training data.
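The core idea behind such attacks, gradient-based perturbation of inputs to suppress a detector's confidence, can be illustrated with a minimal toy sketch. This is not the paper's method (the paper optimizes a physical 3D mesh placed on vehicle rooftops); here, a simple differentiable linear-plus-tanh score stands in for a real LiDAR detector, and each point is moved against the score gradient with FGSM-style signed steps. All names and the score function are illustrative assumptions.

```python
import math
import random

def detector_score(points, w):
    # Toy "detector confidence": mean tanh of a linear function of each
    # point's coordinates. A stand-in for a real detector network.
    return sum(
        math.tanh(sum(c * wc for c, wc in zip(p, w))) for p in points
    ) / len(points)

def adversarial_perturb(points, w, step=0.05, iters=50):
    # Iteratively move every point against the gradient of the score,
    # using only the sign of the gradient (FGSM-style signed steps).
    pts = [list(p) for p in points]
    for _ in range(iters):
        for p in pts:
            s = sum(c * wc for c, wc in zip(p, w))
            g = 1.0 - math.tanh(s) ** 2  # positive factor of d(tanh)/ds
            for i, wc in enumerate(w):
                p[i] -= step * math.copysign(1.0, g * wc)
    return pts

random.seed(0)
cloud = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(64)]
w = [0.5, -0.3, 0.8]  # hypothetical detector weights (assumed nonzero)
adv = adversarial_perturb(cloud, w)
```

A real attack would backpropagate through the detector network and constrain the perturbation to a physically realizable shape, but the optimization loop has the same structure: score, gradient, signed update.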
