Paper Title
Maximal Dissent: a State-Dependent Way to Agree in Distributed Convex Optimization
Paper Authors
Paper Abstract
Consider a set of agents collaboratively solving a distributed convex optimization problem, asynchronously, under stringent communication constraints. In such situations, when an agent is activated and is allowed to communicate with only one of its neighbors, we would like to pick the neighbor holding the most informative local estimate. We propose new algorithms in which the agents with maximal dissent average their estimates, leading to an information mixing mechanism that often displays faster convergence to an optimal solution than randomized gossip. The core idea is that averaging the states of the two neighboring agents whose local estimates are farthest apart yields the largest possible immediate reduction of the quadratic Lyapunov function used to establish convergence to the set of optimal solutions. As a broader contribution, we prove the convergence of max-dissent subgradient methods using a unified framework that can be applied to other state-dependent distributed optimization algorithms. Our proof technique bypasses the need to establish information flow between any two agents within a time interval of uniform length, by carefully studying convergence properties of the Lyapunov function used in our analysis.
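To make the mechanism concrete, here is a minimal sketch of one iteration of a max-dissent subgradient scheme as described in the abstract: the neighboring pair whose local estimates are farthest apart averages their states, and then every agent takes a step along a subgradient of its local objective. This is an illustrative toy, not the paper's exact algorithm; the function names, the synchronous subgradient step, and the toy objective (a sum of absolute-value terms, minimized at the median) are all assumptions made for the example.

```python
import numpy as np

def max_dissent_subgradient_step(x, edges, subgrads, step):
    """One illustrative iteration of a max-dissent subgradient method.

    x        : (n_agents, dim) array of local estimates
    edges    : list of (i, j) neighbor pairs in the communication graph
    subgrads : list of callables g_k(x_k) returning a subgradient of f_k
    step     : step size for this iteration
    """
    # Pick the neighboring pair whose estimates are farthest apart
    # (the "maximal dissent" edge).
    i, j = max(edges, key=lambda e: np.linalg.norm(x[e[0]] - x[e[1]]))
    avg = 0.5 * (x[i] + x[j])
    x[i] = avg  # the max-dissent pair averages their states
    x[j] = avg
    # Each agent then descends along a subgradient of its local objective.
    for k in range(len(x)):
        x[k] = x[k] - step * subgrads[k](x[k])
    return x

# Toy problem (an assumption for illustration): f_k(x) = |x - a_k|,
# so the sum is minimized at the median of the a_k.
a = np.array([0.0, 1.0, 4.0])
subgrads = [lambda z, ak=ak: np.sign(z - ak) for ak in a]
x = a.reshape(3, 1).copy()        # each agent starts at its own target
edges = [(0, 1), (1, 2)]          # path graph
for t in range(1, 500):
    x = max_dissent_subgradient_step(x, edges, subgrads, 1.0 / t)
print(x.ravel())                  # estimates cluster near the median, 1.0
```

With a diminishing step size, the averaging of the most-disagreeing pair pulls the estimates toward consensus while the subgradient steps drive them toward the minimizer of the sum of local objectives.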