Paper Title
FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation
Paper Authors
Paper Abstract
The statistical heterogeneity of the non-independent and identically distributed (non-IID) data in local clients significantly limits the performance of federated learning. Previous attempts like FedProx, SCAFFOLD, MOON, FedNova and FedDyn resort to an optimization perspective, which requires an auxiliary term or re-weights local updates to calibrate the learning bias or the objective inconsistency. However, in addition to previous explorations for improvement in federated averaging, our analysis shows that another critical bottleneck is the poorer optima of client models in more heterogeneous conditions. We thus introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices. We provide theoretical analysis of the possible benefit from FedSkip and conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and competitive communication efficiency. Source code is available at: https://github.com/MediaBrain-SJTU/FedSkip.
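To make the skip-aggregation idea from the abstract concrete, below is a minimal sketch of one communication round. It is not the authors' implementation (see the linked repository for that); the function names, the `skip_period` parameter, and the choice of a random permutation as the "scatter" step are illustrative assumptions based only on the abstract's description of periodically skipping federated averaging and redistributing local models to other devices.

```python
import random

def fedavg(models):
    # Element-wise average of client parameter vectors (standard FedAvg step).
    return [sum(ws) / len(ws) for ws in zip(*models)]

def fedskip_round(models, round_idx, skip_period, rng):
    """Hypothetical sketch of one FedSkip communication round.

    Every `skip_period` rounds, aggregate as usual and broadcast the
    global model; in the other rounds, skip averaging and instead
    scatter (here: randomly permute) the local models, so each client
    continues training from a peer's model.
    """
    if round_idx % skip_period == 0:
        avg = fedavg(models)
        # Broadcast an independent copy of the global model to each client.
        return [list(avg) for _ in models]
    scattered = models[:]
    rng.shuffle(scattered)  # skip aggregation: redistribute local models
    return scattered
```

The permutation step is only one plausible reading of "scattering local models to cross devices"; the key contrast with plain FedAvg is that on skipped rounds each client receives another client's model rather than the average.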