Paper Title
FedMR: Federated Learning via Model Recombination
Paper Authors
Paper Abstract
As a promising privacy-preserving machine learning method, Federated Learning (FL) enables global model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on unevenly distributed (non-IID) data, since most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of local models, which strongly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial model to all clients, which can easily cause the search for an optimal global model to become stuck in a local optimum. To address the above issues, this paper proposes a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server of FedMR shuffles each layer of the collected local models and recombines them into new models for local training on clients. Thanks to the fine-grained model recombination and local training in each FL round, FedMR can quickly converge to a globally optimal model for all the clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without causing extra communication overhead.
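To make the recombination step concrete, the following is a minimal sketch of layer-wise model recombination as the abstract describes it: for every layer, the server shuffles which local model contributes that layer, so each recombined model mixes layers from different clients. The function name recombine_models, the state-dict representation of models, and the independent per-key shuffling are illustrative assumptions, not the paper's exact implementation.

```python
import random
from collections import OrderedDict

def recombine_models(local_states, rng=random):
    """Layer-wise model recombination (sketch).

    local_states: list of state dicts, one per collected local model,
    each mapping a layer name to that layer's parameters. Returns the
    same number of recombined state dicts, where every layer slot is
    filled by a randomly permuted source model, so each source layer
    is used exactly once per key.
    """
    num_models = len(local_states)
    new_states = [OrderedDict() for _ in range(num_models)]

    for key in local_states[0]:
        # Randomly permute which local model contributes this layer.
        order = list(range(num_models))
        rng.shuffle(order)
        for dst, src in enumerate(order):
            new_states[dst][key] = local_states[src][key]

    return new_states

# Toy usage with plain floats standing in for parameter tensors:
m1 = OrderedDict(conv1=1.0, fc=10.0)
m2 = OrderedDict(conv1=2.0, fc=20.0)
m3 = OrderedDict(conv1=3.0, fc=30.0)
for m in recombine_models([m1, m2, m3]):
    print(dict(m))
```

Because each key is permuted independently with a fresh shuffle, every recombined model receives exactly one source model's parameters per layer, which preserves the total set of trained layers while breaking up whole-model identities; a real implementation might instead group related keys (e.g., a layer's weight and bias) so they travel together.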