Paper Title

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

Authors

Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou

Abstract

DeepFakes are raising significant social concerns. Although various DeepFake detectors have been developed as forensic countermeasures, these detectors are still vulnerable to attacks. Recently, a few attacks, principally adversarial attacks, have succeeded in cloaking DeepFake images to evade detection. However, these attacks have typical detector-specific designs, which require prior knowledge about the detector, leading to poor transferability. Moreover, these attacks only consider simple security scenarios. Less is known about how effective they are in high-level scenarios where either the detectors or the attacker's knowledge varies. In this paper, we address the above challenges by presenting a novel detector-agnostic trace removal attack for DeepFake anti-forensics. Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline, attempting to remove all detectable natural DeepFake traces to render the fake images more "authentic". To implement this attack, we first perform a DeepFake trace discovery, identifying three discernible traces. Then a trace removal network (TR-Net) is proposed based on an adversarial learning framework involving one generator and multiple discriminators. Each discriminator is responsible for one individual trace representation to avoid cross-trace interference. These discriminators are arranged in parallel, which prompts the generator to remove various traces simultaneously. To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied. The experimental results show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors while causing only a negligible loss in visual quality to the original DeepFake samples.
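The abstract describes TR-Net's core design at a high level: one generator edits a fake image, while several discriminators, each watching a different trace representation, are trained in parallel and jointly push the generator to erase all traces at once. The sketch below illustrates such a one-generator / multi-discriminator training loop in PyTorch. It is a minimal illustration only: the network architectures, the three trace extractors (spatial, noise residual, frequency spectrum), the loss weights, and all names are assumptions made for this example, not the paper's actual TR-Net implementation.

```python
# Minimal sketch of a one-generator / multi-discriminator adversarial setup,
# assumed to roughly mirror the TR-Net idea described in the abstract.
# Architectures, trace extractors, and loss weights are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy image-to-image generator that edits a fake face to remove traces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy patch-style discriminator; one instance per trace representation."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Placeholder trace extractors (assumed): identity for the spatial trace, a
# high-frequency residual for the noise trace, and a log-FFT magnitude for
# the spectrum trace. The paper's actual trace representations may differ.
def spatial_trace(x):
    return x

def noise_trace(x):
    blur = nn.functional.avg_pool2d(x, 3, stride=1, padding=1)
    return x - blur

def spectrum_trace(x):
    return torch.log1p(torch.fft.fft2(x).abs())

trace_fns = {"spatial": spatial_trace, "noise": noise_trace, "spectrum": spectrum_trace}
G = Generator()
Ds = nn.ModuleDict({name: Discriminator() for name in trace_fns})  # parallel discriminators

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Ds.parameters(), lr=2e-4)

def train_step(real_imgs, fake_imgs):
    # 1) Update the discriminators: traces of real images -> 1, traces of G's output -> 0.
    cleaned = G(fake_imgs).detach()
    opt_d.zero_grad()
    d_loss = 0.0
    for name, fn in trace_fns.items():
        pred_real = Ds[name](fn(real_imgs))
        pred_fake = Ds[name](fn(cleaned))
        d_loss = d_loss + bce(pred_real, torch.ones_like(pred_real)) \
                        + bce(pred_fake, torch.zeros_like(pred_fake))
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator: fool every trace discriminator simultaneously,
    #    plus a fidelity term to keep visual quality close to the input (assumed weight 1.0).
    opt_g.zero_grad()
    cleaned = G(fake_imgs)
    g_loss = nn.functional.l1_loss(cleaned, fake_imgs)
    for name, fn in trace_fns.items():
        pred = Ds[name](fn(cleaned))
        g_loss = g_loss + bce(pred, torch.ones_like(pred))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    real = torch.rand(2, 3, 64, 64)
    fake = torch.rand(2, 3, 64, 64)
    print(train_step(real, fake))
```

Keeping one discriminator per trace, rather than a single discriminator over a concatenated representation, reflects the abstract's point about avoiding cross-trace interference: each adversarial signal stays confined to its own trace representation while the parallel arrangement pushes the generator to remove all traces at once.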
