Title
Adversarial Privacy Protection on Speech Enhancement
Authors
Abstract
Speech is easily leaked imperceptibly, for example when it is recorded by mobile phones in different situations. Private content in speech may be maliciously extracted through speech enhancement technology. Speech enhancement has developed rapidly alongside deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. In this work, we propose an adversarial method to degrade speech enhancement systems. Experimental results show that, after speech enhancement, the generated adversarial examples can erase most of the content information in the original examples or replace it with target speech content. The word error rate (WER) between the recognition results of an enhanced original example and its enhanced adversarial example can reach 89.0%. In the targeted attack, the WER between the enhanced adversarial example and the target example is as low as 33.75%. The adversarial perturbation raises the rate of change of the original example to more than 1.4430. This work can prevent the malicious extraction of speech.
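The WER figures quoted above are the standard word error rate: the word-level edit distance (substitutions, insertions, deletions) between a reference transcript and a hypothesis transcript, normalized by the reference length. As a minimal sketch (not code from the paper), WER can be computed with a classic dynamic-programming edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance between the word sequences,
    divided by the number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + sub_cost,  # match / substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)


# One substitution in a four-word reference gives WER = 0.25.
print(wer("a b c d", "a b x d"))  # → 0.25
```

A high WER between the enhanced original and the enhanced adversarial example (89.0% above) indicates the perturbation destroyed most recoverable content; a low WER against the target transcript (33.75%) indicates the targeted attack largely succeeded.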