Paper Title
Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization
Paper Authors
Paper Abstract
The distributed subgradient method (DSG) is a widely discussed algorithm for coping with large-scale distributed optimization problems in emerging machine learning applications. Most existing works on DSG focus on ideal communication between the cooperative agents, such that the information shared between agents is exact and perfect. This assumption, however, could lead to potential privacy concerns and is not feasible when the wireless transmission links are of poor quality. To overcome this challenge, a common approach is to quantize the data locally before transmission, which avoids exposure of the raw data and significantly reduces the data size. Compared with perfect data, quantization poses the fundamental challenge of reduced data accuracy, which in turn affects the convergence of the algorithms. To address this problem, we propose a generalized distributed subgradient method with random quantization, which can be interpreted as a two-time-scale stochastic approximation method. We provide comprehensive results on the convergence of the algorithm and derive upper bounds on the convergence rates in terms of the number of quantization bits, the stepsizes, and the number of network agents. Our results extend existing results, where only special cases are considered and general conclusions on the convergence rates are missing. Finally, numerical simulations are conducted on linear regression problems to support our theoretical results.
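
To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of a distributed subgradient step with unbiased random quantization on a linear regression problem. It assumes a doubly stochastic mixing matrix W, a bounded quantization range, and a diminishing stepsize; the function names (stochastic_quantize, quantized_dsg) and parameter choices are hypothetical.

```python
import numpy as np

def stochastic_quantize(x, bits, x_min=-10.0, x_max=10.0):
    """Unbiased (dithered) quantization of x onto a uniform b-bit grid on [x_min, x_max]."""
    levels = 2 ** bits - 1
    delta = (x_max - x_min) / levels                       # grid spacing
    x = np.clip(x, x_min, x_max)
    low = np.floor((x - x_min) / delta)                    # index of the lower grid point
    prob_up = (x - x_min) / delta - low                    # round up with this probability
    idx = low + (np.random.rand(*x.shape) < prob_up)
    return x_min + idx * delta

def quantized_dsg(A, b, W, bits=8, iters=2000):
    """Distributed subgradient method for least squares: agent i holds data (A[i], b[i])
    and exchanges only quantized iterates with its neighbors via the mixing matrix W."""
    n, d = len(A), A[0].shape[1]
    x = np.zeros((n, d))                                   # one local iterate per agent
    for k in range(1, iters + 1):
        q = stochastic_quantize(x, bits)                   # quantize before "transmission"
        step = 1.0 / np.sqrt(k)                            # diminishing stepsize
        for i in range(n):
            mix = sum(W[i, j] * q[j] for j in range(n))    # consensus on quantized iterates
            grad = A[i].T @ (A[i] @ x[i] - b[i])           # local least-squares gradient
            x[i] = mix - step * grad
    return x.mean(axis=0)
```

For instance, with n agents one could set W = np.full((n, n), 1.0 / n) (a complete graph with uniform weights) and pass per-agent data matrices A[i] and targets b[i]. Each agent transmits only the b-bit quantized copy of its iterate, so the consensus step operates on inexact values; the abstract indicates that the paper's convergence bounds relate this quantization resolution, the stepsizes, and the number of agents to the achievable convergence rate.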