Paper Title
Rethinking Adversarial Examples for Location Privacy Protection
Paper Authors
Paper Abstract
We have investigated a new application of adversarial examples, namely location privacy protection against landmark recognition systems. We introduce mask-guided multimodal projected gradient descent (MM-PGD), in which adversarial examples are trained on different deep models. Image contents are protected by analyzing the properties of regions to identify the ones most suitable for blending in adversarial examples. We investigated two region identification strategies: class activation map-based MM-PGD, in which the internal behaviors of trained deep models are targeted; and human-vision-based MM-PGD, in which regions that attract less human attention are targeted. Experiments on the Places365 dataset demonstrated that these strategies are potentially effective in defending against black-box landmark recognition systems without the need for much image manipulation.
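The core idea of mask-guided PGD is to restrict the usual projected-gradient-descent perturbation to the identified regions, so that only the chosen pixels are modified. Below is a minimal, hedged sketch of that masking step using NumPy and a toy linear classifier in place of a real deep model; the function name `mask_guided_pgd`, the toy model, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mask_guided_pgd(x, y, W, mask, eps=0.05, alpha=0.01, steps=40):
    """Untargeted L-infinity PGD restricted to a binary region mask.

    x    : flat image with values in [0, 1], shape (d,)
    y    : true class index
    W    : weights of a toy linear classifier (stand-in for a deep
           model), shape (k, d)
    mask : binary array of shape (d,); 1 = pixel may be perturbed
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        p[y] -= 1.0                  # d(cross-entropy)/d(logits)
        grad = W.T @ p               # d(cross-entropy)/d(x_adv)
        # Ascend the loss, but only inside the masked region.
        x_adv = x_adv + alpha * np.sign(grad) * mask
        # Project back into the eps-ball around x and the valid range.
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

In the paper's two strategies, the `mask` would come either from a class activation map of the trained model or from a human-vision saliency estimate; here it is simply supplied by the caller. Pixels outside the mask are left exactly unchanged, which is what keeps the visible image manipulation small.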