Paper Title
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
Paper Authors
Paper Abstract
Raw deep neural network (DNN) performance is not enough; in real-world settings, computational load, training efficiency, and adversarial security are just as or even more important. We propose to simultaneously tackle Performance, Efficiency, and Robustness using our proposed algorithm Q-TART: Quickly Train for Adversarial Robustness and in-Transferability. Q-TART follows the intuition that samples highly susceptible to noise strongly affect the decision boundaries learned by DNNs, which in turn degrades their performance and increases their adversarial susceptibility. By identifying and removing such samples, we demonstrate improved performance and adversarial robustness while using only a subset of the training data. Through our experiments, we highlight Q-TART's high performance across multiple Dataset-DNN combinations, including ImageNet, and provide insights into the complementary behavior of Q-TART alongside existing adversarial training approaches, increasing robustness by over 1.3% while using up to 17.9% less training time.
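To make the abstract's intuition concrete, below is a minimal sketch of the sample-selection idea it describes: each training sample is scored by how unstable the model's prediction is under small input perturbations, and the most susceptible fraction is dropped before training on the remaining subset. The specific susceptibility score (mean KL divergence between clean and Gaussian-noised predictions) and the names and defaults `noise_susceptibility`, `select_robust_subset`, `keep_frac`, `sigma`, and `n_trials` are illustrative assumptions, not the paper's published criterion.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def noise_susceptibility(model, x, n_trials=8, sigma=0.05):
    """Score samples by prediction instability under Gaussian input noise.

    Returns, per sample, the mean KL divergence between the model's clean
    prediction and its predictions on noise-perturbed copies of the input.
    (Illustrative score; not necessarily the paper's actual criterion.)
    """
    model.eval()
    clean_logp = F.log_softmax(model(x), dim=1)  # (B, C) log-probabilities
    scores = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_trials):
        noisy_logp = F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
        # Per-sample KL(clean || noisy), summed over the class dimension.
        scores += F.kl_div(noisy_logp, clean_logp,
                           log_target=True, reduction="none").sum(dim=1)
    return scores / n_trials

def select_robust_subset(model, dataset, keep_frac=0.8, batch_size=256, device="cpu"):
    """Keep the keep_frac least noise-susceptible samples for training."""
    model = model.to(device)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    scores = torch.cat([noise_susceptibility(model, x.to(device)).cpu()
                        for x, _ in loader])
    keep = torch.argsort(scores)[: int(keep_frac * len(scores))]
    return Subset(dataset, keep.tolist())
```

In such a setup, the returned subset would replace the full training set, and, per the abstract's claim of complementarity, training on it could be combined with an existing adversarial training scheme (e.g., PGD-based) rather than replacing one.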