Paper Title

FedBC: Calibrating Global and Local Models via Federated Learning Beyond Consensus

Paper Authors

Amrit Singh Bedi, Chen Fan, Alec Koppel, Anit Kumar Sahu, Brian M. Sadler, Furong Huang, Dinesh Manocha

Paper Abstract

In this work, we quantitatively calibrate the performance of global and local models in federated learning through a multi-criterion optimization-based framework, which we cast as a constrained program. Each device seeks to minimize its local objective while satisfying nonlinear constraints that quantify the proximity between the local and the global model. By considering the Lagrangian relaxation of this problem, we develop a novel primal-dual method called Federated Learning Beyond Consensus (\texttt{FedBC}). Theoretically, we establish that \texttt{FedBC} converges to a first-order stationary point at rates that match the state of the art, up to an additional error term that depends on a tolerance parameter introduced to scalarize the multi-criterion formulation. Finally, we demonstrate that \texttt{FedBC} balances global and local model test accuracy across a suite of datasets (Synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
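For concreteness, the following is a minimal sketch of the per-device constrained program and its Lagrangian relaxation described in the abstract, in assumed notation ($\theta_i$: device $i$'s local model; $\theta$: the global model; $f_i$: device $i$'s local objective; $\gamma_i \ge 0$: the tolerance parameter; $\lambda_i \ge 0$: the dual variable); the paper's own symbols may differ.

% Per-device constrained program (assumed notation): minimize the local
% objective subject to a nonlinear proximity constraint to the global model.
\[
\min_{\theta_i}\; f_i(\theta_i)
\quad \text{s.t.} \quad \|\theta_i - \theta\|^2 \le \gamma_i .
\]

% Lagrangian relaxation on which the primal-dual (\texttt{FedBC}) updates act;
% \gamma_i scalarizes the multi-criterion trade-off between local fit and consensus.
\[
\mathcal{L}_i(\theta_i, \lambda_i)
= f_i(\theta_i) + \lambda_i \left( \|\theta_i - \theta\|^2 - \gamma_i \right),
\qquad \lambda_i \ge 0 .
\]

Under this reading, $\gamma_i = 0$ enforces exact consensus with the global model, while $\gamma_i > 0$ permits a calibrated local deviation, which is the "beyond consensus" behavior the title suggests.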
