Paper Title
Second-Order Guarantees in Federated Learning
Paper Authors
Paper Abstract
Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy. Federated architectures are frequently deployed in deep learning settings, which generally give rise to non-convex optimization problems. Nevertheless, most existing analyses are either limited to convex loss functions or establish only first-order stationarity, despite the fact that saddle points, which are first-order stationary, are known to pose bottlenecks in deep learning. We draw on recent results on the second-order optimality of stochastic gradient algorithms in centralized and decentralized settings, and establish second-order guarantees for a class of federated learning algorithms.
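For concreteness, the approximate notions typically contrasted here (standard definitions from the non-convex optimization literature, not quoted from this paper): a point x is an \epsilon-first-order stationary point of f if \|\nabla f(x)\| \le \epsilon, and an (\epsilon, \delta)-second-order stationary point if additionally \lambda_{\min}(\nabla^2 f(x)) \ge -\delta. A strict saddle point satisfies the first condition but violates the second, which is why first-order guarantees alone do not rule saddle points out.

Below is a minimal Python sketch of the kind of federated algorithm such an analysis concerns: each client runs a few steps of perturbed local SGD starting from the server model, and the server averages the results. This is an illustrative FedAvg-style loop under assumed names (local_sgd, fedavg_round, grad_fn, noise_std), not the paper's specific method; the isotropic noise term stands in for the gradient stochasticity that is known to help such methods escape strict saddle points.

    import numpy as np

    def local_sgd(w, grad_fn, data, lr=0.01, steps=5, noise_std=0.01, batch_size=32):
        # Run a few steps of perturbed SGD from the current server model w.
        # The isotropic noise term is one standard mechanism by which
        # stochastic gradient methods escape strict saddle points.
        w = w.copy()
        for _ in range(steps):
            idx = np.random.choice(len(data), size=batch_size)
            g = grad_fn(w, data[idx])
            w -= lr * (g + noise_std * np.random.randn(*w.shape))
        return w

    def fedavg_round(w_server, client_datasets, grad_fn):
        # One communication round: every client starts from the server model,
        # runs local SGD on its own (heterogeneous) data, and the server
        # averages the returned models.
        local_models = [local_sgd(w_server, grad_fn, d) for d in client_datasets]
        return np.mean(local_models, axis=0)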