Paper Title

Advbox: a toolbox to generate adversarial examples that fool neural networks

Authors

Dou Goodman, Hao Xin, Wang Yang, Wu Yuesheng, Junfeng Xiong, Huan Zhang

Abstract

In recent years, neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance. Recent studies have shown that they are all vulnerable to adversarial examples: small and often imperceptible perturbations to the input images are sufficient to fool the most powerful neural networks. Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Compared to previous work, our platform supports black-box attacks on Machine-Learning-as-a-Service, as well as more attack scenarios, such as Face Recognition Attack, Stealth T-shirt, and DeepFake Face Detect. The code is licensed under the Apache License 2.0 and is openly available at https://github.com/advboxes/AdvBox. Advbox now supports Python 3.
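
The kind of attack the abstract refers to can be illustrated with a short, self-contained sketch. Below is a minimal Fast Gradient Sign Method (FGSM) attack written in PyTorch; it is an illustrative example of the classic white-box setting, not AdvBox's own API, and it assumes model is a classifier over images with pixel values scaled to [0, 1]:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # FGSM (Goodfellow et al., 2015): take one step of size epsilon
    # along the sign of the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp so the adversarial example remains a valid image.
    return x_adv.clamp(0.0, 1.0).detach()

Even a perturbation this small is often invisible to a human while still flipping the model's prediction; toolboxes such as AdvBox bundle many attacks of this kind, including black-box variants that need no gradient access, behind a common interface so that model robustness can be benchmarked systematically.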
