Paper Title
Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization
Paper Authors
Paper Abstract
A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization. Given that exact coordinate minimization is impossible due to the unknown stochastic nature of the objective function, the crux of the proposed optimization algorithm is an optimal control of the minimization precision in each iteration. We establish the optimal precision control and the resulting order-optimal regret performance for strongly convex and separably nonsmooth functions. An interesting finding is that the optimal progression of precision across iterations is independent of the low-dimensional CM routine employed, suggesting a general framework for extending low-dimensional optimization routines to high-dimensional problems. The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization. Requiring only a sublinear order of message exchanges, it also lends itself well to distributed computing as compared with the alternative approach of coordinate gradient descent.
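The iterative scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: it assumes a strongly convex quadratic objective observed through additive noise, a uniformly random coordinate selection rule, and a precision schedule that tightens the per-coordinate minimization by averaging more noisy samples as iterations progress. All of these specific choices (the quadratic model, the `n_samples = t` schedule, the closed-form 1-D update) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of stochastic coordinate minimization with
# progressive precision (assumptions noted in the text above).
rng = np.random.default_rng(0)
d = 5

# Strongly convex, non-separable quadratic: f(x) = 0.5 x^T A x - b^T x.
M = rng.standard_normal((d, d))
A = M.T @ M / d + np.eye(d)
b = rng.standard_normal(d)
x = np.zeros(d)
x_star = np.linalg.solve(A, b)  # exact minimizer, for reference only

def noisy_partial(x, i, n_samples):
    """Average n_samples noisy observations of the i-th partial derivative."""
    g = A[i] @ x - b[i]
    return g + rng.standard_normal(n_samples).mean()

T = 2000
for t in range(1, T + 1):
    i = int(rng.integers(d))            # pick a random coordinate
    # Progressive precision: the averaged gradient's noise shrinks as
    # 1/sqrt(t), so each 1-D minimization gets more accurate over time.
    g = noisy_partial(x, i, n_samples=t)
    # Approximate coordinate minimization: exact 1-D Newton step for a
    # quadratic, perturbed only by the residual gradient noise.
    x[i] -= g / A[i, i]

print(np.linalg.norm(x - x_star))       # distance to the true minimizer
```

Because the injected noise decays with the precision schedule while the exact coordinate updates contract the error, the iterate drifts toward the minimizer; the schedule here is one simple choice, whereas the paper's contribution is characterizing the optimal progression of this precision.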