Paper Title
Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization
Paper Authors
Paper Abstract
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}~F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to stochastic first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$. Building upon standard stochastic extragradient analysis for variational inequalities, we present a stochastic \emph{accelerated gradient-extragradient (AG-EG)} descent-ascent algorithm that combines extragradient and Nesterov's acceleration in general stochastic settings. This algorithm leverages scheduled restarting to admit a fine-grained nonasymptotic convergence rate that matches known lower bounds by both \citet{ibrahim2020linear} and \citet{zhang2021lower} in their corresponding settings, plus an additional statistical error term for bounded stochastic noise that is optimal up to a constant prefactor. This is the first result that achieves such a relatively mature characterization of optimality in saddle-point optimization.
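To make the setting concrete, below is a minimal sketch of a plain stochastic extragradient step on a bilinearly-coupled saddle-point problem of the stated form, not the paper's AG-EG method itself (which additionally incorporates Nesterov acceleration and scheduled restarting). The quadratic choices of $F$ and $G$, the coupling matrix `B`, the Gaussian noise model, and the step size `eta` are all illustrative assumptions.

```python
# Sketch: stochastic extragradient on min_x max_y F(x) + x^T B y - G(y),
# with F(x) = 0.5*mu_x*||x||^2 and G(y) = 0.5*mu_y*||y||^2 (assumed).
import numpy as np

rng = np.random.default_rng(0)
d, eta, sigma = 5, 0.05, 0.01      # dimension, step size, oracle noise level
mu_x, mu_y = 1.0, 1.0              # strong convexity / concavity moduli
B = rng.standard_normal((d, d))    # bilinear coupling matrix

def stochastic_grads(x, y):
    """Noisy first-order oracle: gradients of the saddle function in x and y."""
    gx = mu_x * x + B @ y + sigma * rng.standard_normal(d)    # ∂/∂x
    gy = B.T @ x - mu_y * y + sigma * rng.standard_normal(d)  # ∂/∂y
    return gx, gy

x, y = np.ones(d), np.ones(d)
for _ in range(2000):
    gx, gy = stochastic_grads(x, y)
    x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation (half) step
    gx, gy = stochastic_grads(x_half, y_half)
    x, y = x - eta * gx, y + eta * gy             # update using midpoint grads

# The unique saddle point here is (0, 0); both norms should shrink to a
# small noise floor governed by sigma.
print(np.linalg.norm(x), np.linalg.norm(y))
```

The extrapolation-then-update structure is what distinguishes extragradient from plain gradient descent-ascent: the final update uses gradients evaluated at the midpoint, which stabilizes the rotational dynamics induced by the bilinear coupling.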