Paper Title

Backpropagating Linearly Improves Transferability of Adversarial Examples

Authors

Yiwen Guo, Qizhang Li, Hao Chen

Abstract

The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit a not-so-new but definitely noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion and can be combined with off-the-shelf attacks that exploit gradients. More specifically, it computes the forward pass as normal but backpropagates the loss as if some nonlinear activations had not been encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs.
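The core mechanism described in the abstract is easy to see in code. Below is a minimal, hypothetical PyTorch sketch of the idea: an activation that behaves like ReLU in the forward pass but is treated as the identity during backpropagation, so gradient-based attacks such as FGSM or PGD receive "more linear" gradients. The class names and wiring here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class LinBPReLU(torch.autograd.Function):
    """ReLU forward, identity backward: a sketch of the LinBP idea."""

    @staticmethod
    def forward(ctx, x):
        # The forward pass is computed as normal (standard ReLU).
        return x.clamp(min=0.0)

    @staticmethod
    def backward(ctx, grad_output):
        # The backward pass acts as if the ReLU had not been encountered:
        # the incoming gradient is passed through without the ReLU mask.
        return grad_output

class LinBPActivation(nn.Module):
    """Drop-in replacement for nn.ReLU in the layers chosen for LinBP."""
    def forward(self, x):
        return LinBPReLU.apply(x)
```

In use, an off-the-shelf gradient-based attack would be run against a surrogate model whose selected ReLU layers are swapped for this module; the crafted adversarial examples are then evaluated on unmodified victim models to measure transferability.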
