Title
A globally convergent method to accelerate large-scale optimization using on-the-fly model hyperreduction: application to shape optimization
Authors
Abstract
We present a numerical method to efficiently solve optimization problems governed by large-scale nonlinear systems of equations, including discretized partial differential equations, using projection-based reduced-order models accelerated with hyperreduction (empirical quadrature) and embedded in a trust-region framework that guarantees global convergence. The proposed framework constructs a hyperreduced model on-the-fly during the solution of the optimization problem, which completely avoids an offline training phase. This ensures all snapshot information is collected along the optimization trajectory, which avoids wasting samples in remote regions of the parameter space that are never visited and inherently avoids the curse of dimensionality of sampling in a high-dimensional parameter space. At each iteration of the proposed algorithm, the reduced basis and empirical quadrature weights are constructed precisely to ensure the convergence criteria of the trust-region method are satisfied, guaranteeing global convergence to a local minimum of the original (unreduced) problem. Numerical experiments on two fluid shape optimization problems verify the global convergence of the method and demonstrate its computational efficiency: speedups of more than 18x (accounting for all computational cost, including cost traditionally considered "offline" such as snapshot collection and data compression) are shown relative to standard optimization approaches that do not leverage model reduction.
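
To make the high-level algorithm in the abstract concrete, the following is a minimal Python sketch of a trust-region loop with on-the-fly model reduction, written under stated assumptions rather than as the authors' implementation. The callables full_model and make_rom, the surrogate interface (objective, minimize_within), and all parameter values are hypothetical placeholders; in particular, the empirical-quadrature weight construction is abstracted behind make_rom, which is assumed to return a surrogate whose gradient matches the full gradient at the trust-region center.

```python
import numpy as np

def pod_basis(snapshots, energy=1.0 - 1e-10):
    """Compress trajectory snapshots into an orthonormal reduced basis
    via truncated SVD (proper orthogonal decomposition)."""
    U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :min(k, U.shape[1])]

def trust_region_rom(x0, full_model, make_rom,
                     radius=1.0, eta=0.1, gtol=1e-6, max_iter=50):
    """Hypothetical on-the-fly ROM-accelerated trust-region loop.

    Assumed interfaces (not from the paper):
      full_model(x) -> (state, objective, gradient)   # full-order solve
      make_rom(basis, x, g) -> surrogate with .objective(y) and
          .minimize_within(x, radius) -> step s, ||s|| <= radius,
          built to be gradient-consistent with g at the center x.
    """
    x, snapshots = np.asarray(x0, dtype=float), []
    for _ in range(max_iter):
        u, J, g = full_model(x)            # full solve only at the center
        if np.linalg.norm(g) < gtol:      # first-order stationarity
            break
        snapshots.append(u)               # snapshots only along trajectory

        basis = pod_basis(snapshots)      # compress trajectory data
        rom = make_rom(basis, x, g)       # hyperreduced surrogate

        s = rom.minimize_within(x, radius)        # cheap subproblem
        _, J_trial, _ = full_model(x + s)
        predicted = J - rom.objective(x + s)
        rho = (J - J_trial) / max(predicted, 1e-14)

        if rho >= eta:                    # sufficient actual decrease
            x = x + s
            if rho > 0.75 and np.linalg.norm(s) > 0.9 * radius:
                radius *= 2.0             # model trustworthy: expand region
        else:
            radius *= 0.5                 # poor prediction: shrink region
    return x
```

The acceptance ratio rho compares the actual decrease of the full objective against the decrease predicted by the surrogate; this standard trust-region mechanism is what yields global convergence, provided the surrogate is sufficiently accurate at the center, which is the role of the per-iteration basis and quadrature-weight construction described in the abstract.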