Paper Title

Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection

Authors

Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu

Abstract

Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results. Their vulnerability has led to a surge of research in this direction. However, most existing works are dedicated to attacking anchor-based object detection models. In this work, we present an effective and efficient algorithm for generating adversarial examples that attack anchor-free object detection models, based on two ideas. First, we conduct category-wise rather than instance-wise attacks on the object detectors. Second, we leverage high-level semantic information to generate the adversarial examples. Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector, but can also be transferred to attack other object detectors, even anchor-based detectors such as Faster R-CNN.
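To make the two ideas concrete, below is a minimal PyTorch sketch of a category-wise attack on a CenterNet-style anchor-free detector. This is an illustration under stated assumptions, not the authors' exact algorithm: it assumes the detector exposes per-category confidence heatmaps of shape (1, C, H, W) with values in [0, 1], and the function name `category_wise_attack`, the confidence threshold, and the PGD-style hyperparameters (`eps`, `alpha`, `steps`) are all hypothetical.

```python
import torch

def category_wise_attack(model, image, eps=8 / 255, alpha=1 / 255, steps=40):
    """Illustrative sketch of a category-wise attack (assumed interface).

    Assumes `model(x)` returns per-category confidence heatmaps of shape
    (1, num_categories, H, W), as in CenterNet-style anchor-free detectors,
    and that `model` is in eval mode. `image` is a (1, 3, H, W) tensor
    scaled to [0, 1].
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        heatmaps = model(x_adv)  # (1, C, H, W) category confidence maps

        # Category-wise loss: sum the above-threshold responses of each
        # category map over all spatial locations, so every instance of a
        # category is attacked jointly rather than one box at a time.
        # The 0.1 threshold is an assumed hyperparameter.
        mask = (heatmaps > 0.1).float()
        loss = (heatmaps * mask).sum(dim=(2, 3)).mean()

        grad = torch.autograd.grad(loss, x_adv)[0]

        # Descend on the detector's confidence to suppress detections,
        # projecting the perturbation back into an L_inf ball of radius eps.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The contrast with an instance-wise attack is in the loss: it aggregates over every above-threshold location of each category's heatmap at once, so a single gradient step pushes down all instances of a category simultaneously, which is what the abstract credits for the attack's efficiency and transferability.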
