Paper Title

Decentralized Gradient Methods with Time-varying Uncoordinated Stepsizes: Convergence Analysis and Privacy Design

Paper Authors

Yongqiang Wang, Angelia Nedic

Paper Abstract

Decentralized optimization enables a network of agents to cooperatively optimize an overall objective function without a central coordinator and is gaining increased attention in domains as diverse as control, sensor networks, data mining, and robotics. However, the information sharing among agents in decentralized optimization also discloses agents' information, which is undesirable or even unacceptable when the data involved are sensitive. This paper proposes two gradient-based decentralized optimization algorithms that can protect participating agents' privacy without compromising optimization accuracy or incurring heavy communication/computational overhead. This is in distinct contrast to differential-privacy-based approaches, which have to trade optimization accuracy for privacy, and encryption-based approaches, which incur heavy communication and computational overhead. Both algorithms leverage a judiciously designed mixing matrix and time-varying uncoordinated stepsizes to enable privacy, one using diminishing stepsizes and the other non-diminishing stepsizes. In both algorithms, when interacting with any one of its neighbors, a participating agent only needs to share one message in each iteration to reach convergence to an exact optimal solution, in contrast to most gradient-tracking-based algorithms, which require every agent to share two messages (an optimization variable and an auxiliary gradient-tracking variable) under non-diminishing stepsizes. Furthermore, both algorithms can guarantee the privacy of a participating agent even when all information shared by the agent is accessible to an adversary, a scenario in which most existing accuracy-maintaining privacy approaches fail to protect privacy. Simulation results confirm the effectiveness of the proposed algorithms.
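To make the setting concrete, below is a minimal sketch of a generic decentralized gradient method over a doubly-stochastic mixing matrix with time-varying, per-agent (uncoordinated) diminishing stepsizes, in which each agent shares only its current estimate with its neighbors at every iteration. This illustrates the algorithm class the abstract refers to, not the paper's privacy-preserving construction; the function `decentralized_gradient`, the quadratic local objectives, and the stepsize schedule are assumptions chosen purely for demonstration.

```python
import numpy as np

# Generic decentralized (sub)gradient descent sketch, NOT the paper's algorithm:
# each agent mixes neighbors' estimates via a doubly-stochastic matrix W and then
# takes a gradient step with its own time-varying (uncoordinated) stepsize.

def decentralized_gradient(W, grads, x0, num_iters=200):
    """W: (n, n) doubly-stochastic mixing matrix matching the network graph.
    grads: list of n callables; grads[i](x) returns agent i's local gradient at x.
    x0: (n, d) array of initial local estimates, one row per agent."""
    n, _ = W.shape
    x = x0.copy()
    rng = np.random.default_rng(0)
    # Each agent draws its own stepsize scaling, so stepsizes are uncoordinated.
    scale = rng.uniform(0.5, 1.5, size=n)
    for k in range(num_iters):
        # One message per neighbor per iteration: the current estimate x[i].
        mixed = W @ x
        for i in range(n):
            alpha_ik = scale[i] / (k + 1)           # diminishing, agent-specific
            x[i] = mixed[i] - alpha_ik * grads[i](x[i])
    return x

if __name__ == "__main__":
    # Hypothetical example: 3 agents with local objectives f_i(x) = 0.5*(x - b_i)^2,
    # whose sum is minimized at the average of the b_i.
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    b = np.array([1.0, 2.0, 6.0])
    grads = [lambda x, bi=bi: x - bi for bi in b]
    x0 = np.zeros((3, 1))
    x = decentralized_gradient(W, grads, x0)
    print("local estimates:", x.ravel(), "target:", b.mean())
```

In the paper's algorithms, the mixing matrix and the time-varying uncoordinated stepsize sequences are additionally designed so that an adversary observing all shared messages cannot infer a participating agent's private information; the sketch above makes no attempt at that design and only shows the message pattern (one shared estimate per neighbor per iteration).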
