Paper Title

Game of Trojans: A Submodular Byzantine Approach

Authors

Dinuka Sahabandu, Arezoo Rajabi, Luyao Niu, Bo Li, Bhaskar Ramasubramanian, Radha Poovendran

Abstract


Machine learning models in the wild have been shown to be vulnerable to Trojan attacks during training. Although many detection mechanisms have been proposed, strong adaptive attackers have been shown to be effective against them. In this paper, we aim to answer the following questions, considering an intelligent and adaptive adversary: (i) What is the minimal number of instances a strong attacker needs to Trojan? and (ii) Is it possible for such an attacker to bypass strong detection mechanisms? We provide an analytical characterization of adversarial capability and of the strategic interactions between the adversary and the detection mechanism that take place in such models. We characterize adversary capability in terms of the fraction of the input dataset that can be embedded with a Trojan trigger. We show that the loss function has a submodular structure, which leads to the design of computationally efficient algorithms to determine this fraction with provable bounds on optimality. We propose a Submodular Trojan algorithm to determine the minimal fraction of samples to inject a Trojan trigger into. To evade detection of the Trojaned model, we model the strategic interactions between the adversary and the Trojan detection mechanism as a two-player game. We show that the adversary wins the game with probability one, thus bypassing detection. We establish this by proving that the output probability distributions of a Trojan model and a clean model are identical when following the Min-Max (MM) Trojan algorithm. We perform extensive evaluations of our algorithms on the MNIST, CIFAR-10, and EuroSAT datasets. The results show that (i) with the Submodular Trojan algorithm, the adversary needs to embed a Trojan trigger into only a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability 1.
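The kind of greedy selection that a submodular loss enables can be illustrated with a toy sketch. The objective below is a hypothetical coverage-style function invented for this example (not the paper's actual loss), and the `influence` matrix is likewise assumed; the point is only that for a monotone submodular objective, greedily adding the sample with the largest marginal gain gives a provably near-minimal fraction of samples to Trojan (the classic 1 - 1/e guarantee).

```python
import numpy as np

def coverage(influence, selected):
    """Monotone submodular objective: f(S) = sum_j max_{i in S} influence[i, j].

    `influence` is a hypothetical (n_samples, n_features) matrix scoring how
    much Trojaning each candidate sample would affect each feature direction.
    """
    if not selected:
        return 0.0
    return float(influence[list(selected)].max(axis=0).sum())

def greedy_trojan_subset(influence, target):
    """Greedily add the sample with the largest marginal gain until the
    objective reaches `target` or no candidate improves it further."""
    n = influence.shape[0]
    selected, current = [], 0.0
    remaining = set(range(n))
    while current < target and remaining:
        # Marginal gain of each remaining candidate given the current set.
        gains = {i: coverage(influence, selected + [i]) - current
                 for i in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # diminishing returns exhausted
            break
        selected.append(best)
        remaining.remove(best)
        current += gains[best]
    return selected, current
```

For instance, with a 4-sample influence matrix in which one sample alone covers every feature, the greedy loop selects just that sample, i.e. a poisoned fraction of 1/4.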
