Paper title
Lagrangian-based methods in convex optimization: prediction-correction frameworks with ergodic convergence rates
Paper authors
Paper abstract
We study the convergence rates of the classical Lagrangian-based methods and their variants for solving convex optimization problems with equality constraints. We present a generalized prediction-correction framework to establish $O(1/k^2)$ ergodic convergence rates. Under a strong convexity assumption, based on the presented prediction-correction framework, several Lagrangian-based methods with $O(1/k^2)$ ergodic convergence rates are derived, such as the augmented Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with a larger step size up to $(1+\sqrt{5})/2$, the linearized ADMM with an indefinite proximal term, and a multi-block ADMM-type method (under the alternative assumption that the gradient of one block is Lipschitz continuous).
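For context, the ADMM scheme referenced above, for the two-block problem $\min_{x,y}\{f(x)+g(y) : Ax+By=b\}$ with penalty parameter $\beta>0$, can be sketched as follows; the dual step-size factor $\gamma$ is the quantity whose admissible range extends up to $(1+\sqrt{5})/2$ (the notation here is a standard convention, not necessarily that of the paper):

```latex
\begin{align*}
x^{k+1} &\in \arg\min_{x}\; f(x) + \tfrac{\beta}{2}\,\|Ax + By^{k} - b - \lambda^{k}/\beta\|^2,\\
y^{k+1} &\in \arg\min_{y}\; g(y) + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - b - \lambda^{k}/\beta\|^2,\\
\lambda^{k+1} &= \lambda^{k} - \gamma\beta\,(Ax^{k+1} + By^{k+1} - b),
\qquad \gamma \in \Bigl(0,\tfrac{1+\sqrt{5}}{2}\Bigr).
\end{align*}
```

With $\gamma=1$ this is the classical ADMM; the enlarged interval $(0,(1+\sqrt{5})/2)$ for the dual step size is the Fortin–Glowinski range.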