Paper Title

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning

Paper Authors

Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh

Paper Abstract

Federated learning (FL) has recently attracted increasing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints. Existing iterative model-averaging-based FL algorithms require a large number of communication rounds to obtain a well-performing model due to extremely unbalanced and non-i.i.d. data partitioning among different clients. Thus, we propose FedDM to build the global training objective from multiple local surrogate functions, which enables the server to gain a more global view of the loss landscape. In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data through distribution matching. FedDM reduces communication rounds and improves model quality by transmitting more informative and smaller synthesized data compared with unwieldy model weights. We conduct extensive experiments on three image classification datasets, and the results show that our method can outperform other FL counterparts in terms of efficiency and model performance. Moreover, we demonstrate that FedDM can be adapted to preserve differential privacy with the Gaussian mechanism and train a better model under the same privacy budget.
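A minimal sketch of the per-client distribution-matching step described in the abstract, assuming a PyTorch setup with 32x32 RGB images. The function name `match_synthetic_set`, its arguments, and the simplified per-class mean-matching objective are illustrative assumptions rather than the authors' released implementation; FedDM's actual objective matches the local loss landscape of the real data, which this surrogate only approximates.

```python
import torch
import torch.nn.functional as F


def match_synthetic_set(model, real_loader, num_classes, images_per_class=10,
                        steps=200, lr=0.1, device="cpu"):
    """Learn a small synthetic set whose per-class statistics match the client's real data."""
    # Learnable synthetic images with a fixed, class-balanced label set (hypothetical sizes).
    syn_x = torch.randn(num_classes * images_per_class, 3, 32, 32,
                        device=device, requires_grad=True)
    syn_y = torch.arange(num_classes, device=device).repeat_interleave(images_per_class)
    opt = torch.optim.SGD([syn_x], lr=lr, momentum=0.5)

    for _ in range(steps):
        real_x, real_y = next(iter(real_loader))
        real_x, real_y = real_x.to(device), real_y.to(device)
        loss = torch.zeros((), device=device)
        for c in range(num_classes):
            real_c = real_x[real_y == c]
            if real_c.shape[0] == 0:
                continue  # a client may not hold every class (non-i.i.d. partition)
            syn_c = syn_x[syn_y == c]
            # Match the mean model output of real and synthetic samples per class.
            with torch.no_grad():
                real_stat = model(real_c).mean(dim=0)
            syn_stat = model(syn_c).mean(dim=0)
            loss = loss + F.mse_loss(syn_stat, real_stat)
        if loss.requires_grad:
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Only this small synthetic set (plus labels) is uploaded instead of model weights;
    # the server then trains the global model on the union of all clients' synthetic sets.
    return syn_x.detach(), syn_y
```

In this reading, the communication saving comes from the returned tensors being far smaller than a full set of model parameters, while still carrying enough information for the server to approximate each client's local objective.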
