Paper Title

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

Paper Authors

Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot

Paper Abstract

Machine learning algorithms are vulnerable to data poisoning attacks. Prior taxonomies that focus on specific scenarios, e.g., indiscriminate or targeted, have enabled defenses for the corresponding subset of known attacks. Yet, this introduces an inevitable arms race between adversaries and defenders. In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks. Specifically, we focus on a common element between all attacks: they modify gradients computed to train the model. We identify two main artifacts of gradients computed in the presence of poison: (1) their $\ell_2$ norms have significantly higher magnitudes than those of clean gradients, and (2) their orientation differs from clean gradients. Based on these observations, we propose the prerequisite for a generic poisoning defense: it must bound gradient magnitudes and minimize differences in orientation. We call this gradient shaping. As an exemplar tool to evaluate the feasibility of gradient shaping, we use differentially private stochastic gradient descent (DP-SGD), which clips and perturbs individual gradients during training to obtain privacy guarantees. We find that DP-SGD, even in configurations that do not result in meaningful privacy guarantees, increases the model's robustness to indiscriminate attacks. It also mitigates worst-case targeted attacks and increases the adversary's cost in multi-poison scenarios. The only attack we find DP-SGD to be ineffective against is a strong, yet unrealistic, indiscriminate attack. Our results suggest that, while we currently lack a generic poisoning defense, gradient shaping is a promising direction for future research.
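The DP-SGD instantiation of gradient shaping described above reduces to two per-batch operations: clip each example's gradient to a fixed $\ell_2$ norm (bounding magnitudes, artifact 1), then perturb the aggregate with Gaussian noise (randomizing orientation, artifact 2). The NumPy sketch below illustrates these mechanics; it is a minimal illustration, not the authors' implementation, and the hyperparameter names (`clip_norm`, `noise_multiplier`) are assumed for exposition.

```python
# Minimal sketch of one DP-SGD step viewed as gradient shaping:
# per-example L2 clipping followed by Gaussian perturbation.
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One gradient-shaped update.

    per_example_grads: array of shape (batch_size, num_params),
    one gradient row per training example.
    """
    rng = rng or np.random.default_rng()

    # (1) Bound gradient magnitudes: scale each example's gradient so its
    #     L2 norm is at most clip_norm. Poisons with unusually large
    #     gradients lose their outsized influence on the update.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # (2) Perturb orientation: add isotropic Gaussian noise calibrated to
    #     the clipping norm, then average over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    grad = (clipped.sum(axis=0) + noise) / per_example_grads.shape[0]

    return params - lr * grad
```

Note that the robustness effect reported in the abstract does not require a privacy-grade `noise_multiplier`: per the authors' findings, even configurations too weak for meaningful differential-privacy guarantees already satisfy the gradient-shaping prerequisite of bounded magnitudes and reduced orientation differences.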
