Paper Title
Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks
Paper Authors
Paper Abstract
This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks have the potential to substantially reduce the computing time of OPF solutions. However, the lack of guarantees for their worst-case performance remains a major barrier to their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs to obtain worst-case guarantees for neural network predictions related to (i) maximum constraint violations, (ii) maximum distances between predicted and optimal decision variables, and (iii) maximum sub-optimality. We demonstrate our methods on a range of PGLib-OPF networks up to 300 buses. We show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods. More importantly, we show that the worst-case predictions appear at the boundaries of the training input domain, and we demonstrate how we can systematically reduce the worst-case guarantees by training on a larger input domain than the one on which they are evaluated.
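The mixed-integer linear programs mentioned in the abstract rely on the standard exact big-M encoding of ReLU activations, which turns "maximize the network's constraint violation over the input domain" into a solvable MILP. The following is a minimal sketch of that idea on a toy network with hand-picked weights; the choice of `scipy.optimize.milp` as the solver and the specific network are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy "trained" network (weights are illustrative, not from the paper):
# one input x in [0, 1], two hidden ReLU units h = max(0, W x + b),
# and a scalar output h1 + h2 standing in for a constraint violation.
W = np.array([[2.0], [-3.0]])
b = np.array([-1.0, 1.0])

# Pre-activation bounds on z = W x + b over x in [0, 1] (interval
# arithmetic); these serve as the big-M constants of the encoding.
L = np.array([-1.0, -2.0])   # z1 in [-1, 1], z2 in [-2, 1]
U = np.array([1.0, 1.0])

# Decision variables: [x, h1, h2, d1, d2], d_i binary ReLU indicators.
# Exact big-M encoding of h_i = max(0, z_i):
#   h_i - W_i x              >= b_i        (h >= z; h >= 0 via bounds)
#   h_i - W_i x + (-L_i) d_i <= b_i - L_i  (h <= z - L (1 - d))
#   h_i - U_i d_i            <= 0          (h <= U d)
A = np.array([
    [-2.0, 1.0, 0.0,  0.0,  0.0],
    [ 3.0, 0.0, 1.0,  0.0,  0.0],
    [-2.0, 1.0, 0.0,  1.0,  0.0],
    [ 3.0, 0.0, 1.0,  0.0,  2.0],
    [ 0.0, 1.0, 0.0, -1.0,  0.0],
    [ 0.0, 0.0, 1.0,  0.0, -1.0],
])
lb = np.array([-1.0, 1.0, -np.inf, -np.inf, -np.inf, -np.inf])
ub = np.array([np.inf, np.inf, 0.0, 3.0, 0.0, 0.0])

# Maximize h1 + h2 over the whole input box  <=>  minimize -(h1 + h2).
res = milp(
    c=np.array([0.0, -1.0, -1.0, 0.0, 0.0]),
    constraints=LinearConstraint(A, lb, ub),
    integrality=np.array([0, 0, 0, 1, 1]),
    bounds=Bounds([0.0, 0.0, 0.0, 0.0, 0.0],
                  [1.0, np.inf, np.inf, 1.0, 1.0]),
)
worst_case = -res.fun  # provable maximum of h1 + h2 over x in [0, 1]
print(worst_case)      # optimal value is 1.0, attained at x = 0 or x = 1
```

Because the big-M encoding is exact whenever the bounds `L`, `U` are valid, the MILP optimum is a certificate rather than an empirical estimate: no sampled input can exceed it, which is precisely the gap between worst-case guarantees and empirical lower bounds that the abstract highlights.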