Paper Title
Distributed Online Convex Optimization with an Aggregative Variable
Paper Authors
Paper Abstract
This paper investigates distributed online convex optimization in the presence of an aggregative variable over a multi-agent network without any global/central coordinator, where each individual agent can only access partial information of the time-varying global loss function, thus requiring local information exchange between neighboring agents. Motivated by many applications in reality, the considered local loss functions depend not only on the agents' own decision variables, but also on an aggregative variable, such as the average of all decision variables. To handle this problem, an Online Distributed Gradient Tracking algorithm (O-DGT) is proposed with exact gradient information, and it is shown that the dynamic regret is upper bounded by three terms: a sublinear term, a path-variation term, and a gradient-variation term. Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy gradients, showing that the expected dynamic regret has the same upper bound as in the exact-gradient case. To the best of our knowledge, this paper is the first to study online convex optimization in the presence of an aggregative variable, which exhibits new characteristics in comparison with the conventional scenario without the aggregative variable. Finally, a numerical experiment is provided to corroborate the obtained theoretical results.
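The abstract does not spell out the O-DGT update rules, so the following is only a minimal, hypothetical sketch of what a distributed online gradient-tracking iteration with an aggregative variable (here, the average of all decisions) can look like. The ring network, the mixing matrix W, the step size alpha, the quadratic time-varying losses, and the helpers grad_x/grad_u are all assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of distributed online optimization with an aggregative variable.
# NOT the paper's O-DGT: losses, network, and step size are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d, T, alpha = 4, 2, 200, 0.05   # agents, decision dimension, rounds, step size

# Doubly stochastic mixing matrix for a 4-agent ring (Metropolis-style weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def grad_x(t, i, x_i):
    """Gradient of an assumed time-varying local loss w.r.t. agent i's own decision."""
    a_it = np.array([np.sin(0.05 * t + i), np.cos(0.05 * t + i)])
    return 2.0 * (x_i - a_it)

def grad_u(t, i, u):
    """Gradient of the assumed local loss w.r.t. the aggregative variable u."""
    b_it = np.array([np.cos(0.05 * t + 0.5 * i), np.sin(0.05 * t + 0.5 * i)])
    return 2.0 * (u - b_it)

x = rng.standard_normal((n, d))                       # local decisions
s = x.copy()                                          # local estimates of the aggregate (average of x)
y = np.stack([grad_u(0, i, s[i]) for i in range(n)])  # trackers of the average aggregate-gradient

for t in range(T):
    # 1) Local descent: own-decision gradient plus the tracked average aggregate-gradient
    #    (the 1/n factor from the averaging aggregation map is already contained in y).
    g = np.stack([grad_x(t, i, x[i]) for i in range(n)])
    x_next = x - alpha * (g + y)

    # 2) Dynamic average consensus so each s_i keeps tracking the network-wide average decision.
    s_next = W @ s + (x_next - x)

    # 3) Gradient tracking so each y_i keeps tracking the average of the aggregate-gradients.
    y_next = W @ y + np.stack(
        [grad_u(t + 1, i, s_next[i]) - grad_u(t, i, s[i]) for i in range(n)])

    x, s, y = x_next, s_next, y_next

print("final decisions:\n", x)
print("aggregate tracking error per agent:", np.linalg.norm(s - x.mean(axis=0), axis=1))
```

In this sketch, s_i tracks the network-wide average of the decisions and y_i tracks the average of the gradients with respect to the aggregative variable, which is the generic gradient-tracking mechanism the algorithm's name suggests; the actual O-DGT update rules and the dynamic-regret analysis are those given in the paper.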