Paper Title
Federated Learning Aggregation: New Robust Algorithms with Guarantees
Paper Authors
Paper Abstract
Federated Learning has recently been proposed for distributed model training at the edge. The principle of this approach is to aggregate models learned on distributed clients in order to obtain a new, more general "average" model (FedAvg). The resulting model is then redistributed to clients for further training. To date, the most popular federated learning algorithm uses coordinate-wise averaging of the model parameters for aggregation. In this paper, we carry out a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework. From this, we derive novel aggregation algorithms that are able to modify their model architecture by differentiating client contributions according to the value of their losses. Moreover, we go beyond the assumptions introduced in the theory by evaluating the performance of these strategies and comparing them with FedAvg on classification tasks, in both the IID and non-IID settings, without additional hypotheses.
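To make the aggregation step concrete, below is a minimal NumPy sketch of the coordinate-wise FedAvg rule described in the abstract, together with a hypothetical loss-weighted variant in the spirit of the proposed algorithms. The function names (aggregate_fedavg, aggregate_loss_weighted) and the softmax-over-negative-losses weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def aggregate_fedavg(client_weights, client_sizes):
    """Coordinate-wise FedAvg: each parameter tensor is averaged across
    clients, weighted by the number of local training samples."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum((n / total) * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

def aggregate_loss_weighted(client_weights, client_losses):
    """Hypothetical loss-aware variant: a softmax over negative losses gives
    low-loss clients a larger aggregation weight. The paper's exact weighting
    scheme may differ; this only illustrates loss-dependent contributions."""
    losses = np.asarray(client_losses, dtype=float)
    alphas = np.exp(-losses)
    alphas /= alphas.sum()  # normalized weights summing to 1
    num_layers = len(client_weights[0])
    return [
        sum(a * w[layer] for w, a in zip(client_weights, alphas))
        for layer in range(num_layers)
    ]

# Toy usage: three clients, each holding a two-tensor "model".
clients = [[np.full((2, 2), k), np.full(2, k)] for k in (1.0, 2.0, 3.0)]
fedavg_model = aggregate_fedavg(clients, client_sizes=[10, 20, 30])
robust_model = aggregate_loss_weighted(clients, client_losses=[0.4, 0.9, 2.1])
```

In both cases the server-side update is a convex combination of client parameters; the two rules differ only in how the combination weights are chosen (sample counts versus reported losses).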