Paper Title

Sample Optimality and All-for-all Strategies in Personalized Federated and Collaborative Learning

Paper Authors

Mathieu Even, Laurent Massoulié, Kevin Scaman

Paper Abstract

In personalized Federated Learning, each member of a potentially large set of agents aims to train a model minimizing its loss function averaged over its local data distribution. We study this problem through the lens of stochastic optimization. Specifically, we introduce information-theoretic lower bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We then provide strategies matching these lower bounds, in the all-for-one and all-for-all settings where respectively one or all agents desire to minimize their own local function. Our strategies are based on a gradient filtering approach: provided prior knowledge on some notions of distances or discrepancies between local data distributions or functions, a given agent filters and aggregates stochastic gradients received from other agents, in order to achieve an optimal bias-variance trade-off.
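
The abstract only sketches the gradient-filtering idea, so here is a minimal illustrative sketch of one plausible instance, not the paper's actual algorithm: in the all-for-one setting, a target agent averages only the stochastic gradients of agents whose prior bias bound falls below the statistical noise floor achieved by averaging. The function name `all_for_one_step`, the threshold rule `sigma / sqrt(k)`, and all parameters are assumptions made for illustration; the paper derives its actual filtering rules from the matching lower bounds.

```python
import numpy as np

def all_for_one_step(theta, grads, biases, lr, sigma):
    """Hypothetical gradient-filtering step for a single target agent.

    theta  : (dim,) current model parameters of the target agent.
    grads  : (n_agents, dim) stochastic gradients, all evaluated at theta.
    biases : (n_agents,) prior knowledge bounding how far each agent's
             expected gradient can be from the target agent's true gradient.
    sigma  : standard deviation of each stochastic gradient.
    """
    # Sort agents by their prior bias bound, closest data distribution first.
    order = np.argsort(biases)
    kept = [order[0]]  # the target agent itself (bias 0 w.r.t. its own loss)
    for j in order[1:]:
        k = len(kept) + 1
        # Keep agent j only while its bias stays below the noise floor
        # sigma / sqrt(k) that averaging k gradients would achieve:
        # accepting a small bias buys a larger variance reduction.
        if biases[j] <= sigma / np.sqrt(k):
            kept.append(j)
        else:
            break  # biases are sorted, so no later agent qualifies either
    filtered_grad = grads[kept].mean(axis=0)
    return theta - lr * filtered_grad

# Toy usage: agent 0 is the target; other agents' gradients drift away
# from its true gradient in proportion to their (assumed) bias bound.
rng = np.random.default_rng(0)
dim, n_agents = 5, 10
theta = np.zeros(dim)
biases = np.linspace(0.0, 1.0, n_agents)
grads = rng.normal(scale=0.5, size=(n_agents, dim)) + biases[:, None]
theta = all_for_one_step(theta, grads, biases, lr=0.1, sigma=0.5)
```

The threshold captures the trade-off named in the abstract: each additional agent shrinks the variance of the averaged gradient, so it is worth admitting agents whose bias is no larger than that shrinking noise level, and filtering out the rest.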
