Title
Communication-Computation Efficient Secure Aggregation for Federated Learning
Authors
Abstract
Federated learning has been spotlighted as a way to train neural networks on distributed data without requiring individual nodes to share that data. Unfortunately, it has also been shown that adversaries may be able to extract local data contents from the model parameters transmitted during federated learning. A recent solution based on the secure aggregation primitive enabled privacy-preserving federated learning, but at the expense of significant extra communication/computational resources. In this paper, we propose a low-complexity scheme that provides data privacy using substantially reduced communication/computational resources relative to the existing secure solution. The key idea behind the suggested scheme is to design the topology of secret-sharing nodes as a sparse random graph instead of the complete graph used in the existing solution. We first obtain a necessary and sufficient condition on the graph to guarantee both reliability and privacy. We then suggest using the Erdős–Rényi graph in particular and provide theoretical guarantees on the reliability/privacy of the proposed scheme. Through extensive real-world experiments, we demonstrate that our scheme, using only $20\sim30\%$ of the resources required by the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.
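To make the key idea concrete, the following is a minimal sketch (not the paper's actual protocol) of the two ingredients the abstract describes: sampling an Erdős–Rényi graph as the secret-sharing topology, and pairwise-mask cancellation restricted to graph neighbors. The function names, the additive-masking scheme, and the modulus are illustrative assumptions for this sketch.

```python
import random

def erdos_renyi_neighbors(n, p, seed=0):
    """Sample G(n, p): each of the n*(n-1)/2 node pairs is
    connected independently with probability p. In the proposed
    scheme, nodes only secret-share with their graph neighbors,
    rather than with all n-1 other nodes as in the complete graph."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def pairwise_mask_offsets(neighbors, modulus=2**16):
    """Illustrative pairwise-masking step: each connected pair
    (i, j) with i < j agrees on a random mask m; node i adds m to
    its update and node j subtracts m, so all masks cancel when
    the server sums the masked updates."""
    rng = random.Random(1)
    offset = {i: 0 for i in neighbors}
    for i in neighbors:
        for j in neighbors[i]:
            if i < j:  # process each undirected edge exactly once
                m = rng.randrange(modulus)
                offset[i] = (offset[i] + m) % modulus
                offset[j] = (offset[j] - m) % modulus
    return offset

# Masks cancel in aggregate even on a sparse topology:
nbrs = erdos_renyi_neighbors(n=10, p=0.3)
offsets = pairwise_mask_offsets(nbrs)
assert sum(offsets.values()) % 2**16 == 0
```

The sparsity parameter p controls the communication/computation savings: each node exchanges masks with roughly p*(n-1) neighbors instead of n-1, at the cost of reliability/privacy guarantees that now hold with high probability rather than deterministically.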