Paper Title

Adversarial Attack on Facial Recognition using Visible Light

Paper Authors

Morgan Frearson, Kien Nguyen

Paper Abstract

The use of deep learning for human identification and object detection is becoming ever more prevalent in the surveillance industry. These systems have been trained to identify human bodies or faces with a high degree of accuracy. However, there have been successful attempts to fool these systems using techniques known as adversarial attacks. This paper presents a final report on an adversarial attack using visible light against facial recognition systems. The relevance of this research is to exploit the physical vulnerabilities of deep neural networks. This demonstration of weaknesses within these systems is offered in the hope that the research will be used in the future to improve training models for object recognition. As results were gathered, the project objectives were adjusted to fit the outcomes; because of this, the following paper initially explores an adversarial attack using infrared light before readjusting to a visible light attack. A research outline on infrared light and facial recognition is presented within, along with a detailed analysis of the current findings and possible future recommendations for the project. The challenges encountered are evaluated and a final solution is delivered. The project's final outcome demonstrates the ability to effectively fool recognition systems using light.
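The abstract gives no implementation details, but the attack concept can be illustrated with a minimal digital sketch: composite a bright colored light spot onto a face image and check whether the recognition model's embedding shifts past its matching threshold. Everything below is a hypothetical stand-in, not the authors' setup; in particular, `embed` is a placeholder for whatever face-recognition model is under attack, and the physical attack in the paper projects real light rather than editing pixels.

```python
import numpy as np

def light_spot(h, w, cy, cx, radius, color, intensity=80.0):
    """Additive Gaussian 'light spot': a crude digital proxy for shining
    visible light onto a face. All parameters here are assumptions."""
    ys, xs = np.mgrid[0:h, 0:w]
    falloff = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * radius ** 2))
    # (h, w, 1) falloff broadcast against an RGB color vector -> (h, w, 3)
    return intensity * falloff[..., None] * np.asarray(color, dtype=np.float32)

def apply_light(image, spot):
    """Composite the light additively and clip to the valid pixel range."""
    return np.clip(image.astype(np.float32) + spot, 0, 255).astype(np.uint8)

def attack_success(embed, face, spot, threshold=0.6):
    """Compare embeddings of the clean and lit face; the attack 'succeeds'
    if the lit face is pushed outside the (assumed) match threshold."""
    clean = embed(face)
    lit = embed(apply_light(face, spot))
    return np.linalg.norm(clean - lit) > threshold

# Example (hypothetical): a warm spot over the nose bridge of a 224x224 crop.
# face = ...  # uint8 array of shape (224, 224, 3)
# spot = light_spot(224, 224, cy=112, cx=112, radius=30, color=(1.0, 0.8, 0.5))
# fooled = attack_success(my_model_embed, face, spot)
```

A physical version of this idea would search over spot position, color, and intensity with a real projector and camera in the loop; the sketch only shows the pixel-space analogue of a single candidate perturbation.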
