Paper Title

Improving Adversarial Transferability via Neuron Attribution-Based Attacks

Authors

Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu

Abstract

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus imperative to devise effective attack algorithms to identify the deficiencies of DNNs beforehand in security-sensitive applications. To efficiently tackle the black-box setting, where the target model's particulars are unknown, feature-level transfer-based attacks propose to contaminate the intermediate feature outputs of local surrogate models and then directly employ the crafted adversarial samples to attack the target model. Owing to the transferability of features, feature-level attacks have shown promise in synthesizing more transferable adversarial samples. However, existing feature-level attacks generally employ inaccurate neuron-importance estimations, which deteriorates their transferability. To overcome such pitfalls, in this paper we propose the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate neuron-importance estimations. Specifically, we first completely attribute a model's output to each neuron in a middle layer. We then derive an approximation scheme of neuron attribution that greatly reduces the computation overhead. Finally, we weight neurons based on their attribution results and launch feature-level attacks. Extensive experiments confirm the superiority of our approach over state-of-the-art benchmarks.
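
For readers mapping the abstract's three steps onto code, the PyTorch sketch below illustrates one plausible reading: an integrated-gradients-style attribution of the true-class logit to a middle layer's neurons, followed by an iterative attack that suppresses positively attributed activations. The surrogate model (resnet18), the layer choice (layer2), the black-image baseline, the step counts, and the simple linear weighting of positive and negative attributions are all illustrative assumptions; the paper's exact approximation scheme and weighting functions differ.

```python
import torch
from torchvision import models

def mid_layer_output(model, layer, x):
    """Run a forward pass and capture the activations of `layer` with a hook."""
    captured = {}
    def hook(_module, _inputs, output):
        captured["feats"] = output
    handle = layer.register_forward_hook(hook)
    logits = model(x)
    handle.remove()
    return captured["feats"], logits

def neuron_attribution(model, layer, x, labels, steps=30):
    """Per-neuron attribution of the true-class logit to a middle layer:
    (activation - baseline activation) times the path-averaged gradient of
    the logit w.r.t. those activations (integrated-gradients style)."""
    baseline = torch.zeros_like(x)  # black-image baseline (assumption)
    grad_sum = 0.0
    for k in range(1, steps + 1):
        x_k = (baseline + (k / steps) * (x - baseline)).requires_grad_(True)
        feats, logits = mid_layer_output(model, layer, x_k)
        score = logits.gather(1, labels.view(-1, 1)).sum()
        grad_sum = grad_sum + torch.autograd.grad(score, feats)[0]
    avg_grad = grad_sum / steps
    with torch.no_grad():
        feats_x, _ = mid_layer_output(model, layer, x)
        feats_b, _ = mid_layer_output(model, layer, baseline)
    return (feats_x - feats_b) * avg_grad

def naa_style_attack(model, layer, x, labels, eps=16 / 255, iters=10, gamma=1.0):
    """Iterative feature-level attack: fix the attribution map, then push
    down positively attributed activations and push up negative ones."""
    attr = neuron_attribution(model, layer, x, labels).detach()
    pos, neg = attr.clamp(min=0), (-attr).clamp(min=0)
    alpha = eps / iters
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        feats, _ = mid_layer_output(model, layer, x_adv)
        # Weighted feature objective to minimize (simplified linear weighting).
        loss = (feats * pos).sum() - gamma * (feats * neg).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L_inf projection
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# Illustrative usage with a hypothetical surrogate model and layer choice.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image batch
labels = torch.tensor([281])    # stand-in ground-truth label
x_adv = naa_style_attack(model, model.layer2, x, labels)
```

Under this reading, the attribution map plays the role of the neuron-importance weights: activations the clean prediction relies on are driven down while suppressed ones are driven up, which is what lets the perturbation transfer at the feature level rather than overfitting the surrogate's decision boundary.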
