Paper Title
Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation
Paper Authors
Paper Abstract
Despite the potential of federated learning, it is known to be vulnerable to backdoor attacks. Many robust federated aggregation methods have been proposed to reduce the potential backdoor risk. However, they are mainly validated in the CV field. In this paper, we find that NLP backdoors are harder to defend against than CV backdoors, and we provide a theoretical analysis showing that the malicious update detection error probabilities are determined by the relative backdoor strengths. NLP attacks tend to have small relative backdoor strengths, which may cause robust federated aggregation methods to fail against NLP attacks. Inspired by the theoretical results, we can choose some dimensions with higher backdoor strengths to address this issue. We propose a novel federated aggregation algorithm, Dim-Krum, for NLP tasks, and experimental results validate its effectiveness.
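As a rough illustration of the idea behind a dimension-wise Krum-based aggregation (this is a hedged sketch, not the paper's exact algorithm; all function names and the neighbor-count choice are illustrative assumptions), one can apply the classic Krum score independently in each coordinate of the clients' updates and keep, per dimension, the value closest to its nearest neighbors:

```python
import numpy as np

def krum_score(values, n_neighbors):
    # values: shape (n_clients,), one coordinate of each client's update.
    # Classic Krum score: sum of squared distances to the n_neighbors
    # closest other values (smaller score = more "central" value).
    diffs = (values[:, None] - values[None, :]) ** 2
    diffs.sort(axis=1)
    # Column 0 is the zero self-distance; skip it.
    return diffs[:, 1:1 + n_neighbors].sum(axis=1)

def dim_wise_krum(updates, n_byzantine):
    # updates: shape (n_clients, dim), flattened client model updates.
    # Apply the Krum selection independently in every dimension, so a
    # malicious client only needs to look like an outlier in a single
    # coordinate to be rejected there.
    n_clients, dim = updates.shape
    n_neighbors = n_clients - n_byzantine - 2  # standard Krum choice
    aggregated = np.empty(dim)
    for d in range(dim):
        scores = krum_score(updates[:, d], n_neighbors)
        aggregated[d] = updates[np.argmin(scores), d]
    return aggregated
```

For example, with five clients whose updates in one dimension are `[0.1, 0.0, -0.1, 0.05, 5.0]` and one assumed Byzantine client, the selected value comes from the benign cluster near zero rather than the outlier at 5.0.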