Paper Title

On Adversarial Robustness of Deep Image Deblurring

Authors

Kanchana Vaishnavi Gandikota, Paramanand Chandramouli, Michael Moeller

Abstract

Recent approaches employ deep learning-based solutions for the recovery of a sharp image from its blurry observation. This paper introduces adversarial attacks against deep learning-based image deblurring methods and evaluates the robustness of these neural networks to untargeted and targeted attacks. We demonstrate that imperceptible distortion can significantly degrade the performance of state-of-the-art deblurring networks, even producing drastically different content in the output, indicating the strong need to include adversarially robust training not only in classification but also for image recovery.
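As an illustrative sketch of the kind of untargeted attack described in the abstract (not the authors' implementation), the snippet below runs a standard PGD-style search for an imperceptible L_inf-bounded perturbation of the blurry input that maximizes the reconstruction error of a deblurring network. The names `deblur_net`, `blurry`, and `sharp` are assumed placeholders for a PyTorch model and paired image tensors in [0, 1].

```python
# Minimal PGD-style untargeted attack sketch against a generic deblurring
# network (assumed PyTorch model `deblur_net`: blurry image -> sharp image).
import torch
import torch.nn.functional as F

def pgd_untargeted(deblur_net, blurry, sharp, eps=4/255, alpha=1/255, steps=10):
    """Search for a perturbation with ||delta||_inf <= eps that degrades
    the network's restoration relative to the sharp reference."""
    delta = torch.zeros_like(blurry, requires_grad=True)
    for _ in range(steps):
        output = deblur_net(blurry + delta)
        # Untargeted objective: push the restored image away from the ground truth.
        loss = F.mse_loss(output, sharp)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                   # gradient ascent step
            delta.clamp_(-eps, eps)                              # project onto L_inf ball
            delta.copy_((blurry + delta).clamp(0, 1) - blurry)   # keep pixels valid
        delta.grad.zero_()
    return (blurry + delta).detach()
```

A targeted variant would instead minimize the distance between the network output and a chosen target image, which corresponds to the "drastically different content in the output" scenario the abstract mentions.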
