Title

End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization

Authors

Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato

Abstract

First-order methods are widely used to solve convex quadratic programs (QPs) in real-time applications because of their low per-iteration cost. However, they can suffer from slow convergence to accurate solutions. In this paper, we present a framework which learns an effective warm-start for a popular first-order method in real-time applications, Douglas-Rachford (DR) splitting, across a family of parametric QPs. This framework consists of two modules: a feedforward neural network block, which takes as input the parameters of the QP and outputs a warm-start, and a block which performs a fixed number of iterations of DR splitting from this warm-start and outputs a candidate solution. A key feature of our framework is its ability to do end-to-end learning as we differentiate through the DR iterations. To illustrate the effectiveness of our method, we provide generalization bounds (based on Rademacher complexity) that improve with the number of training problems and number of iterations simultaneously. We further apply our method to three real-time applications and observe that, by learning good warm-starts, we are able to significantly reduce the number of iterations required to obtain high-quality solutions.
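The two-module structure described in the abstract, a feedforward network that maps QP parameters to a warm-start followed by a fixed number of differentiable DR splitting steps, can be sketched compactly in a differentiable-programming framework. Below is a minimal, illustrative sketch in JAX (not the authors' code) for a box-constrained QP, minimize (1/2) xᵀPx + qᵀx subject to l ≤ x ≤ u, where P is assumed fixed across the problem family and (q, l, u) are the varying parameters; the network architecture, parameterization, and training loss are assumptions made for illustration only.

```python
# Minimal sketch (assumptions noted above): learn a warm-start z0 for
# Douglas-Rachford (DR) splitting on a box-constrained QP and differentiate
# end-to-end through a fixed number of DR iterations.
import jax
import jax.numpy as jnp

def dr_iterations(z0, P, q, l, u, num_iters=10):
    """Run a fixed number of DR splitting steps from warm-start z0."""
    n = q.shape[0]
    # prox of f(x) = 0.5 x^T P x + q^T x with unit step: solve (P + I) x = z - q.
    M = P + jnp.eye(n)
    def step(z, _):
        x = jnp.linalg.solve(M, z - q)      # prox_f(z)
        y = jnp.clip(2.0 * x - z, l, u)     # prox_g(2x - z), g = indicator of [l, u]
        z_next = z + y - x                  # DR update
        return z_next, None
    z_final, _ = jax.lax.scan(step, z0, None, length=num_iters)
    # Candidate solution recovered from the final DR iterate.
    return jnp.linalg.solve(M, z_final - q)

def warm_start_net(weights, theta):
    """Tiny feedforward network (illustrative) mapping QP parameters to a warm-start."""
    W1, b1, W2, b2 = weights
    h = jnp.tanh(W1 @ theta + b1)
    return W2 @ h + b2

def end_to_end_loss(weights, P, q, l, u, num_iters=10):
    """Objective value after num_iters DR steps; gradients flow through the iterations."""
    theta = jnp.concatenate([q, l, u])
    z0 = warm_start_net(weights, theta)
    x = dr_iterations(z0, P, q, l, u, num_iters)
    return 0.5 * x @ P @ x + q @ x

# Gradients with respect to the network weights, differentiating through all DR steps:
grad_fn = jax.grad(end_to_end_loss)
```

Because each DR step is composed of differentiable operations (a linear solve, a clip, and an affine update), `jax.grad` propagates gradients from the post-iteration loss back to the warm-start network, which is the end-to-end training signal the paper relies on.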
