Paper Title

Towards Resistant Audio Adversarial Examples

Paper Authors

Tom Dörr, Karla Markert, Nicolas M. Müller, Konstantin Böttinger

Paper Abstract

Adversarial examples tremendously threaten the availability and integrity of machine learning-based systems. While the feasibility of such attacks was first observed in the domain of image processing, recent research shows that speech recognition is also susceptible to adversarial attacks. However, reliably bridging the air gap (i.e., making adversarial examples work when recorded via a microphone) has so far eluded researchers. We find that, due to flaws in the generation process, state-of-the-art adversarial example generation methods cause overfitting because of the binning operation in the target speech recognition system (e.g., Mozilla DeepSpeech). We devise an approach to mitigate this flaw and find that our method improves the generation of adversarial examples with varying offsets. We confirm the significant improvement of our approach by an empirical comparison of the edit distance in a realistic over-the-air setting. Our approach marks an important step towards over-the-air attacks. We publish the code and an applicable implementation of our approach.
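
The core idea sketched in the abstract, optimizing the perturbation under varying sample offsets so it does not overfit to one fixed framing/binning alignment, can be illustrated as follows. This is a minimal sketch, not the authors' released implementation: `model`, `ctc_loss_fn`, `delta`, and the hop length of 320 samples are hypothetical stand-ins for a DeepSpeech-style recognizer, its CTC loss, and the trainable perturbation.

```python
# Minimal sketch (not the authors' released code): one gradient step of a
# targeted audio attack in which the perturbed waveform is shifted by a
# random offset before feature extraction, so the perturbation cannot
# overfit to a single frame/binning alignment.
import torch
import torch.nn.functional as F

def attack_step(model, ctc_loss_fn, audio, target, delta, optimizer,
                hop_length=320):
    """One optimization step under a random offset in [0, hop_length).

    `model` and `ctc_loss_fn` are hypothetical stand-ins for a
    DeepSpeech-style recognizer and its CTC loss; `delta` is the
    trainable perturbation tensor managed by `optimizer`.
    """
    offset = int(torch.randint(0, hop_length, (1,)))
    perturbed = audio + delta
    # Prepend `offset` zero samples and truncate to the original length,
    # emulating a recording that starts slightly later.
    shifted = F.pad(perturbed, (offset, 0))[..., : perturbed.shape[-1]]
    loss = ctc_loss_fn(model(shifted), target)  # drive output toward `target`
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step with a fresh random offset each iteration approximates optimizing for an alignment-invariant perturbation; the paper's actual procedure and hyperparameters may differ.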
