Paper Title

Defending Hardware-based Malware Detectors against Adversarial Attacks

Paper Authors

Abraham Peedikayil Kuruvila, Shamik Kundu, Kanad Basu

Abstract

In the era of the Internet of Things (IoT), Malware has been proliferating exponentially over the past decade. Traditional anti-virus software is ineffective against modern complex Malware. To address this challenge, researchers have proposed Hardware-assisted Malware Detection (HMD) using Hardware Performance Counters (HPCs). The HPCs are used to train a set of Machine Learning (ML) classifiers, which, in turn, are used to distinguish benign programs from Malware. Recently, adversarial attacks have been designed that introduce perturbations into the HPC traces using an adversarial sample predictor, causing a program to be misclassified for specific HPCs. These attacks rest on the basic assumption that the attacker knows which HPCs are being used to detect Malware. Since modern processors contain hundreds of HPCs, restricting detection to only a few of them aids the attacker. In this paper, we propose a Moving Target Defense (MTD) against this adversarial attack by designing multiple ML classifiers trained on different sets of HPCs. The MTD randomly selects a classifier, thus confusing the attacker about both the HPCs in use and the number of classifiers applied. We have developed an analytical model which proves that the probability that an attacker guesses the perfect HPC-classifier combination for the MTD is extremely low (on the order of $10^{-1864}$ for a system with 20 HPCs). Our experimental results show that the proposed defense improves the classification accuracy of HPC traces that have been modified through an adversarial sample generator by up to 31.5%, for a near-perfect (99.4%) restoration of the original accuracy.
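
To make the defense concrete, below is a minimal Python sketch of the MTD idea described in the abstract: several ML classifiers are each trained on a different subset of HPCs, and one classifier is chosen at random for every detection query. The subset size, ensemble size, the RandomForest model, and the simple unordered-ensemble count at the end are illustrative assumptions for this sketch, not the authors' exact configuration or analytical model.

```python
# Minimal sketch of an HPC-based moving target defense (MTD).
# Assumptions (not from the paper): 20 HPCs, 4 HPCs per classifier,
# an ensemble of 8 RandomForest models, and synthetic training data.
import random
from itertools import combinations
from math import comb

import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_HPCS = 20          # HPCs available on the processor (figure used in the abstract)
HPCS_PER_MODEL = 4   # assumed number of HPCs each classifier observes
N_CLASSIFIERS = 8    # assumed ensemble size


class HPCMovingTargetDefense:
    """Train one classifier per random HPC subset; answer each query with a random one."""

    def __init__(self, n_hpcs=N_HPCS, hpcs_per_model=HPCS_PER_MODEL,
                 n_classifiers=N_CLASSIFIERS, seed=0):
        picker = random.Random(seed)
        all_subsets = list(combinations(range(n_hpcs), hpcs_per_model))
        self.subsets = [list(s) for s in picker.sample(all_subsets, n_classifiers)]
        self.models = [RandomForestClassifier(n_estimators=50, random_state=seed)
                       for _ in self.subsets]

    def fit(self, X, y):
        # X: (samples, n_hpcs) matrix of HPC readings; y: 0 = benign, 1 = Malware.
        for model, subset in zip(self.models, self.subsets):
            model.fit(X[:, subset], y)
        return self

    def predict(self, X):
        # Randomly pick which classifier (and hence which HPC subset) answers,
        # so the attacker cannot tell which HPCs must be perturbed.
        idx = random.randrange(len(self.models))
        return self.models[idx].predict(X[:, self.subsets[idx]])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.poisson(lam=100.0, size=(200, N_HPCS)).astype(float)  # synthetic HPC traces
    y = rng.integers(0, 2, size=200)                              # synthetic labels
    mtd = HPCMovingTargetDefense().fit(X, y)
    print("Predictions on 5 samples:", mtd.predict(X[:5]))

    # Rough illustration of why blind guessing is hopeless: this counts only
    # unordered ensembles of fixed-size subsets (the paper's analytical model
    # enumerates far more configurations, hence its much smaller ~1e-1864 probability).
    ensembles = comb(comb(N_HPCS, HPCS_PER_MODEL), N_CLASSIFIERS)
    print(f"P(guess this ensemble blindly) ~ 1 / {ensembles:.2e}")
```

Because the serving classifier, and hence the observed HPC subset, changes unpredictably from query to query, a perturbation crafted against one HPC set generally does not transfer to the others; this is the intuition behind the accuracy restoration reported in the abstract.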
