Paper Title

FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling

Paper Authors

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

Paper Abstract

Federated learning (FL) is a feasible technique to learn personalized recommendation models from decentralized user data. Unfortunately, federated recommender systems are vulnerable to poisoning attacks by malicious clients. Existing recommender system poisoning methods mainly focus on promoting the recommendation chances of target items due to financial incentives. In practice, however, an attacker may also attempt to degrade the overall performance of recommender systems. Existing general FL poisoning methods for degrading model performance are either ineffective or not covert when poisoning federated recommender systems. In this paper, we propose a simple yet effective and covert poisoning attack method on federated recommendation, named FedAttack. Its core idea is to use the globally hardest samples to subvert model training. More specifically, the malicious clients first infer user embeddings based on local user profiles. Next, they choose the candidate items most relevant to the user embeddings as the hardest negative samples, and treat the candidates farthest from the user embeddings as the hardest positive samples. The model gradients inferred from these poisoned samples are then uploaded to the server for aggregation and model updating. Since the behaviors of malicious clients are somewhat similar to those of users with diverse interests, they cannot be effectively distinguished from normal clients by the server. Extensive experiments on two benchmark datasets show that FedAttack can effectively degrade the performance of various federated recommender systems, while it cannot be effectively detected or defended against by many existing methods.
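
As a rough illustration of the hard-sampling step described in the abstract, the sketch below scores candidate items against a locally inferred user embedding, then takes the most similar candidates as poisoned "hardest negative" samples and the least similar ones as poisoned "hardest positive" samples. The dot-product scoring, the function name select_poisoned_samples, and all variable names are assumptions made for illustration only; the paper's exact similarity measure, model architecture, and training pipeline are not specified in this abstract.

```python
import numpy as np

def select_poisoned_samples(user_embedding, candidate_embeddings, k):
    """Sketch of FedAttack-style hard sampling (illustrative assumptions only).

    Scores every candidate item against the locally inferred user embedding,
    then returns the k most similar candidates as "hardest negative" samples
    and the k least similar candidates as "hardest positive" samples.
    """
    # Relevance of each candidate item to the user embedding (dot product).
    scores = candidate_embeddings @ user_embedding

    order = np.argsort(scores)       # candidate indices, ascending relevance
    hardest_negatives = order[-k:]   # most relevant items used as poisoned negatives
    hardest_positives = order[:k]    # least relevant items used as poisoned positives
    return hardest_positives, hardest_negatives


# Toy usage: 8-dimensional embeddings for 100 candidate items.
rng = np.random.default_rng(0)
user_emb = rng.normal(size=8)
item_embs = rng.normal(size=(100, 8))
pos_idx, neg_idx = select_poisoned_samples(user_emb, item_embs, k=5)
# A malicious client would then compute gradients on these label-swapped
# samples and upload them to the server in place of gradients from real behavior.
```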
