Paper Title
MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Paper Authors
Paper Abstract
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust when the majority of clients are honest. FLTrust (NDSS '21) and Zeno++ (ICML '20) do not make such an honest-majority assumption but can only be applied to scenarios where the server is provided with an auxiliary dataset used to filter malicious updates. FLAME (USENIX '22) and EIFFeL (CCS '22) retain the semi-honest-majority assumption to guarantee robustness and the confidentiality of updates. It is therefore currently impossible to ensure Byzantine robustness and the confidentiality of updates without assuming a semi-honest majority. To tackle this problem, we propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority \emph{or majority} on both the server and client sides. Based on DBSCAN, we design a new method for extracting features from model updates via pairwise adjusted cosine similarity to boost the accuracy of the resulting clustering. To thwart attacks from a malicious majority, we develop a method called \textit{Model Segmentation}, which aggregates only the updates from within a cluster and sends the corresponding model only to the clients of that cluster. The fundamental idea is that even if malicious clients are in the majority, their poisoned updates cannot harm benign clients as long as they are confined to the malicious cluster. We also leverage multiple cryptographic tools to conduct clustering without sacrificing training correctness or update confidentiality. We present a detailed security proof and empirical evaluation, along with a convergence analysis, for MUDGUARD.
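To make the clustering-and-segmentation idea concrete, below is a minimal plaintext sketch in Python (NumPy/scikit-learn). It assumes "adjusted" cosine similarity means centering each update coordinate-wise by the mean across clients, and it uses plain FedAvg within each cluster; the function names and parameter defaults are illustrative assumptions, not the authors' reference implementation, and the actual MUDGUARD protocol performs these steps under cryptographic protection.

import numpy as np
from sklearn.cluster import DBSCAN

def adjusted_cosine_distance_matrix(updates: np.ndarray) -> np.ndarray:
    """Pairwise distances derived from adjusted cosine similarity.

    updates: (n_clients, n_params) array of flattened model updates.
    Assumption: "adjusted" = center each coordinate by its mean across
    clients before computing cosine similarity.
    """
    centered = updates - updates.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    norms[norms == 0] = 1e-12                  # guard against zero vectors
    unit = centered / norms
    sim = np.clip(unit @ unit.T, -1.0, 1.0)    # pairwise cosine similarity
    return 1.0 - sim                           # distance in [0, 2]

def model_segmentation(updates: np.ndarray, eps: float = 0.5,
                       min_samples: int = 3):
    """Cluster updates with DBSCAN, then aggregate per cluster.

    Returns (labels, segments) where segments maps each cluster label to
    its aggregated update. Each client would receive only the model of
    its own cluster, so a poisoned (even majority) cluster cannot
    contaminate benign clusters.
    """
    dist = adjusted_cosine_distance_matrix(updates)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    segments = {}
    for label in np.unique(labels):
        members = updates[labels == label]
        segments[label] = members.mean(axis=0)  # FedAvg within the cluster
    return labels, segments

Note that DBSCAN marks outliers with the label -1; for simplicity this sketch aggregates them as one group, whereas a deployed system would need an explicit policy for such unclustered updates.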