Paper Title
Distributed Kernel Ridge Regression with Communications
Paper Authors
Paper Abstract
This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory. Taking distributed kernel ridge regression (DKRR) as an example, we succeed in deriving its optimal learning rates in expectation and providing theoretically optimal ranges for the number of local processors. Due to the gap between theory and experiments, we also deduce optimal learning rates for DKRR in probability to essentially reflect the generalization performance and limitations of DKRR. Furthermore, we propose a communication strategy to improve the learning performance of DKRR and demonstrate the power of communications in DKRR via both theoretical assessments and numerical experiments.
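For readers unfamiliar with DKRR, the following is a minimal sketch of the standard divide-and-conquer variant that the abstract builds on: each local processor fits kernel ridge regression on its own data shard, and the global estimator averages the local predictions. This sketch uses an assumed Gaussian kernel and hypothetical parameter values (`lam`, `gamma`); it illustrates the baseline scheme only, not the communication strategy proposed in the paper.

```python
import numpy as np

def krr_fit(X, y, lam, gamma):
    """Kernel ridge regression on one local shard.

    Solves (K + n*lam*I) alpha = y with a Gaussian kernel
    K_ij = exp(-gamma * ||x_i - x_j||^2).
    """
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return X, alpha

def krr_predict(model, Xt, gamma):
    """Evaluate a fitted local KRR estimator at test points Xt."""
    X, alpha = model
    sq = ((Xt[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq) @ alpha

def dkrr_predict(shards, Xt, lam, gamma):
    """Divide-and-conquer DKRR: average the local KRR predictions."""
    preds = [krr_predict(krr_fit(X, y, lam, gamma), Xt, gamma)
             for X, y in shards]
    return np.mean(preds, axis=0)
```

A usage example: split a sample of size N across m machines, fit locally, and average. The averaging step is a single round of communication (each machine sends only its predictions), which is what makes the scheme attractive for large N.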