Paper Title

Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training

Paper Authors

Manoj, B. R., Sadeghi, Meysam, Larsson, Erik G.

Paper Abstract

The successful emergence of deep learning (DL) in wireless system applications has raised concerns about new security-related challenges. One such security challenge is adversarial attacks. Although there has been much work demonstrating the susceptibility of DL-based classification tasks to adversarial attacks, regression-based problems in the context of a wireless system have not been studied so far from an attack perspective. The aim of this paper is twofold: (i) we consider a regression problem in a wireless setting and show that adversarial attacks can break the DL-based approach, and (ii) we analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly. Specifically, the wireless application considered in this paper is DL-based power allocation in the downlink of a multicell massive multiple-input multiple-output system, where the goal of the attack is to cause the DL model to yield an infeasible solution. We extend the gradient-based adversarial attacks, namely the fast gradient sign method (FGSM), momentum iterative FGSM, and the projected gradient descent method, to analyze the susceptibility of the considered wireless application with and without adversarial training. We analyze the performance of the deep neural network (DNN) models against these attacks, where the adversarial perturbations are crafted using both white-box and black-box attacks.
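To make the attack idea concrete, the following is a minimal single-step FGSM sketch in PyTorch. It is not the authors' implementation: the toy regression network, the input/output dimensions, and the "infeasibility" objective (pushing the total predicted power above a hypothetical per-cell budget) are all illustrative assumptions standing in for the paper's power-allocation DNN and attack loss.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression DNN standing in for the paper's power-allocation model:
# maps UE geometry features to per-UE downlink powers (dimensions are illustrative).
model = nn.Sequential(
    nn.Linear(10, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 5),
)
model.eval()

def fgsm_attack(x, epsilon, attack_loss):
    """Single-step FGSM: move the input by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = attack_loss(model(x_adv))
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Illustrative attack objective (an assumption, not the paper's exact loss):
# maximize the total allocated power so that the DNN output exceeds a
# hypothetical power budget, i.e. becomes an infeasible allocation.
POWER_BUDGET = 1.0
def infeasibility_loss(p_pred):
    return p_pred.sum(dim=-1).mean()

x_clean = torch.rand(4, 10)  # batch of clean (random) inputs, for illustration only
x_adv = fgsm_attack(x_clean, epsilon=0.05, attack_loss=infeasibility_loss)

with torch.no_grad():
    total_power = model(x_adv).sum(dim=-1)
    print("Total power per sample:", total_power.tolist())
    print("Infeasible (exceeds budget):", (total_power > POWER_BUDGET).tolist())
```

The iterative attacks named in the abstract (momentum iterative FGSM and projected gradient descent) build on this same step by repeating it with gradient momentum and by projecting the accumulated perturbation back into the allowed epsilon-ball, respectively.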
