Title
A Human-in-the-Middle Attack against Object Detection Systems
Authors
Abstract
Object detection systems using deep learning models have become increasingly popular in robotics thanks to the rising power of CPUs and GPUs in embedded systems. However, these models are susceptible to adversarial attacks. While some attacks are limited by strict assumptions on access to the detection system, we propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography. This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system via a hardware attack. Moreover, prior research has been misled by an evaluation metric that measures model accuracy rather than attack performance. In combination with our proposed evaluation metrics, we significantly increase the strength of adversarial perturbations. These findings raise serious concerns about the application of deep learning models in safety-critical systems, such as autonomous driving.
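To make the injection step concrete, the following is a minimal sketch, assuming a precomputed UAP stored in a hypothetical file `uap.npy` and a setup using OpenCV and NumPy; the actual hardware implementation described in the paper sits between the USB camera and the host, whereas this sketch only illustrates adding the perturbation to each captured frame in software.

```python
import cv2
import numpy as np

# Hypothetical file holding a precomputed universal adversarial perturbation
# (UAP); the paper does not specify a storage format, this is an assumption.
uap = np.load("uap.npy").astype(np.float32)  # shape (H, W, 3), small values

cap = cv2.VideoCapture(0)  # the USB camera the detector would read from

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize the perturbation to match the frame, add it, and clip to the
    # valid 8-bit pixel range so the result is still a legal image.
    delta = cv2.resize(uap, (frame.shape[1], frame.shape[0]))
    attacked = np.clip(frame.astype(np.float32) + delta, 0, 255).astype(np.uint8)
    # In the hardware attack the perturbed frame would be forwarded to the
    # detection system in place of the clean one; here we only display it.
    cv2.imshow("perturbed stream", attacked)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```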