Paper Title


Defensive Approximation: Securing CNNs using Approximate Computing

Authors

Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh

Abstract


In the past few years, an increasing number of machine learning and deep learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to solving a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, for black-box and grey-box attack scenarios, we show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has access to the internal implementation of the approximate classifier. We explain some of the possible reasons for this robustness through analysis of the internal operation of the approximate implementation. Furthermore, our approximate computing model maintains the same level of classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks. We empirically show that the proposed implementation increases the robustness of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, against strong grey-box adversarial attacks, along with up to 67% savings in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average 4 dB degradation in the PSNR of the input image relative to images that succeed in fooling the exact classifier.
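The defense described in the abstract lives in hardware (an approximate multiplier inside the CNN datapath), but the core intuition can be sketched in software. The snippet below is a minimal, illustrative model and not the paper's actual multiplier design: it emulates an approximate floating-point multiplier by truncating low-order float32 mantissa bits, then compares an exact and an approximate convolution. The function names, the truncation scheme, and the 8-bit setting are assumptions chosen for illustration.

```python
import numpy as np

def truncate_mantissa(x, keep_bits=8):
    """Hypothetical software stand-in for an approximate multiplier input:
    keep only the top `keep_bits` of the 23 float32 mantissa bits."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (bits & mask).view(np.float32)

def conv2d(image, kernel, multiply):
    """Naive valid-mode 2-D convolution with a pluggable multiplier."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(multiply(image[i:i + kh, j:j + kw], kernel))
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8), dtype=np.float32)        # toy input "image"
ker = rng.standard_normal((3, 3)).astype(np.float32)

exact = conv2d(img, ker, lambda a, b: a * b)
approx = conv2d(img, ker,
                lambda a, b: truncate_mantissa(a) * truncate_mantissa(b))
print("max |exact - approx| =", np.abs(exact - approx).max())
```

The small, data-dependent discrepancy printed at the end plays the role of the perturbation that, per the abstract, disrupts the transferability of adversarial examples crafted against the exact classifier while leaving clean-input classification accuracy essentially unchanged.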
