Paper Title

Federated Learning with a Sampling Algorithm under Isoperimetry

Paper Authors

Lukang Sun, Adil Salim, Peter Richtárik

Paper Abstract

Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices that own the training data. These techniques critically rely on reducing the communication cost -- the main bottleneck -- between the devices and a central server. Federated learning algorithms usually take an optimization approach: they are algorithms for minimizing the training loss subject to communication (and other) constraints. In this work, we instead take a Bayesian approach to the training task and propose a communication-efficient variant of the Langevin algorithm to sample from the posterior. The latter approach is more robust and provides more knowledge of the a posteriori distribution than its optimization counterpart. We analyze our algorithm without assuming that the target distribution is strongly log-concave. Instead, we assume the weaker log-Sobolev inequality, which allows for nonconvexity.
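
For intuition, here is a minimal sketch of one step of a federated unadjusted Langevin algorithm (ULA). This is not the paper's specific communication-efficient scheme, which the abstract only alludes to; the per-device gradient callables and the uncompressed summation at the server are illustrative assumptions. The target is the posterior pi proportional to exp(-sum_i f_i), where f_i is the potential held by device i. The log-Sobolev inequality assumed in the paper is weaker than strong log-concavity of pi, so such a target may have a nonconvex potential.

```python
import numpy as np

def federated_langevin_step(theta, device_grads, step_size, rng):
    """One unadjusted Langevin (ULA) step on the aggregated potential.

    theta        : current parameter sample, shape (d,)
    device_grads : list of callables; device_grads[i](theta) returns the
                   gradient of device i's local potential f_i (assumed API)
    step_size    : Langevin step size gamma > 0
    rng          : numpy.random.Generator
    """
    # Each device communicates its local gradient; the server sums them
    # to get the gradient of the full potential U = sum_i f_i. (The paper
    # studies reducing this communication cost; plain summation here is a
    # simplifying assumption, not the paper's method.)
    grad = sum(g(theta) for g in device_grads)

    # Langevin update: gradient step plus Gaussian noise. For small
    # step_size, the iterates approximately sample pi ~ exp(-U).
    noise = rng.standard_normal(theta.shape)
    return theta - step_size * grad + np.sqrt(2.0 * step_size) * noise


# Toy usage: two devices whose potentials sum to the negative log-density
# of a standard Gaussian, so the chain targets N(0, I).
rng = np.random.default_rng(0)
grads = [lambda t: 0.5 * t, lambda t: 0.5 * t]
theta = np.zeros(2)
for _ in range(1000):
    theta = federated_langevin_step(theta, grads, 1e-2, rng)
```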
