Paper Title
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
Paper Authors
Paper Abstract
Deep neural networks are known to be susceptible to adversarial perturbations -- small perturbations that alter the output of the network and exist under strict norm limitations. While such perturbations are usually discussed as being tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a whole set of inputs. Universal perturbations present a more realistic case of adversarial attack, as awareness of the model's exact input is not required. In addition, the universal attack setting raises the question of generalization to unseen data: given a set of inputs, the universal perturbation aims to alter the model's output on out-of-sample data as well. In this work, we study physical passive patch adversarial attacks on visual odometry-based autonomous navigation systems. A visual odometry system aims to infer the relative camera motion between two corresponding viewpoints, and is frequently used by vision-based autonomous navigation systems to estimate their state. For such navigation systems, a patch adversarial perturbation poses a severe security issue, as it can be used to mislead a system onto a collision course. To the best of our knowledge, we show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene. We provide an evaluation on synthetic closed-loop drone navigation data and demonstrate that a comparable vulnerability exists in real data. A reference implementation of the proposed method and the reported experiments is provided at https://github.com/patchadversarialattacks/patchadversarialattacks.
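To make the attack setting concrete, the PyTorch-style sketch below optimizes a single ("universal") patch so as to increase a visual odometry model's motion-estimation error across a whole dataset, mirroring the universal-perturbation formulation in the abstract. This is a minimal illustrative sketch only: `vo_model` (a differentiable VO model), `insert_patch` (a differentiable renderer that places the patch into each frame), and the data loader interface are assumed placeholders, not the API of the released implementation.

import torch

def optimize_universal_patch(vo_model, insert_patch, data_loader,
                             patch_size=(3, 64, 64), steps=100, lr=1e-2):
    """Optimize one patch, shared across all inputs, to degrade VO estimates."""
    # A single patch is trained over the whole dataset, so the attack does
    # not require knowledge of the exact input at deployment time.
    patch = torch.rand(patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Each batch: consecutive frame pairs plus ground-truth relative motion.
        for frames, true_motion in data_loader:
            # Hypothetical differentiable rendering of the patch into the scene.
            adv_frames = insert_patch(frames, patch.clamp(0.0, 1.0))
            pred_motion = vo_model(adv_frames)  # estimated relative camera motion
            # Gradient ascent on the VO error: minimize the negative error.
            loss = -torch.nn.functional.mse_loss(pred_motion, true_motion)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0.0, 1.0)

In the physical passive setting studied in the paper, the patch would then be printed and placed in the scene; `insert_patch` stands in for the viewpoint-dependent appearance of that printed patch in the camera images.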