Paper Title

Convergence Rate of a Penalty Method for Strongly Convex Problems with Linear Constraints

Paper Authors

Angelia Nedich, Tatiana Tatarenko

Paper Abstract

We consider an optimization problem with a strongly convex objective and linear inequality constraints. To be able to deal with a large number of constraints, we provide a penalty reformulation of the problem. As penalty functions, we use a version of the one-sided Huber losses. The smoothness properties of these functions allow us to choose time-varying penalty parameters in such a way that the incremental procedure with a diminishing step size converges to the exact solution at the rate $O(1/\sqrt{k})$. To the best of our knowledge, we present the first result on the convergence rate of a penalty-based gradient method in which the penalty parameters vary with time.
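
To make the construction concrete, below is a minimal sketch of a penalty-based gradient method for minimizing a strongly convex $f$ subject to $Ax \le b$, with a one-sided Huber loss as the penalty. The piecewise form of the penalty, the full-gradient update (used here in place of the incremental procedure the abstract describes), the step-size and penalty schedules, and all names (`one_sided_huber`, `penalized_gradient_method`, `delta`, `alpha_k`, `gamma_k`) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def one_sided_huber(u, delta):
    # One-sided Huber penalty (a common smoothed form of max(u, 0)):
    # 0 for u <= 0, quadratic on (0, delta], linear with unit slope beyond delta.
    return np.where(u <= 0.0, 0.0,
                    np.where(u <= delta, u ** 2 / (2.0 * delta), u - delta / 2.0))

def one_sided_huber_grad(u, delta):
    # Derivative of the penalty above: 0, u / delta, or 1 depending on the region.
    return np.clip(u / delta, 0.0, 1.0)

def penalized_gradient_method(grad_f, A, b, x0, num_iters=5000, delta=1e-2):
    # Gradient steps on the penalized objective
    #     f(x) + gamma_k * sum_i h_delta(a_i^T x - b_i).
    # The schedules for alpha_k and gamma_k are placeholders, and the full
    # penalty gradient is used instead of the incremental updates the paper studies.
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, num_iters + 1):
        alpha_k = 1.0 / k                   # diminishing step size (assumed)
        gamma_k = np.log(k + 1.0) + 1.0     # slowly growing penalty parameter (assumed)
        residuals = A @ x - b               # constraint values a_i^T x - b_i
        penalty_grad = A.T @ one_sided_huber_grad(residuals, delta)
        x -= alpha_k * (grad_f(x) + gamma_k * penalty_grad)
    return x

# Hypothetical usage: project c onto {x : A x <= b} by taking f(x) = 0.5 * ||x - c||^2,
# whose gradient is x - c; the result should lie close to the Euclidean projection.
c = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_hat = penalized_gradient_method(lambda x: x - c, A, b, x0=np.zeros(2))
```

The design mirrors the idea stated in the abstract: the Huber smoothing keeps the penalty gradient Lipschitz continuous, which is what permits a penalty weight that grows over time while the step size shrinks, so the iterates are driven toward the exact constrained solution rather than a fixed-penalty approximation. The specific schedules that yield the $O(1/\sqrt{k})$ rate are derived in the paper.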
