Paper Title
Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm
Paper Authors
Paper Abstract
We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This is a limited computational memory quasi-Newton method widely used for deterministic unconstrained optimization, but it is currently avoided in large-scale problems because it requires step sizes to be provided at each iteration. Existing methodologies for step-size selection in L-BFGS rely on heuristic tuning of design parameters and extensive re-evaluations of the objective function and gradient to find appropriate step lengths. We propose a neural network architecture that takes local information of the current iterate as input. The step-size policy is learned from data of similar optimization problems, avoids additional evaluations of the objective function, and guarantees that the output step remains inside a pre-defined interval. The corresponding training procedure is formulated as a stochastic optimization problem using the backpropagation through time algorithm. The performance of the proposed method is evaluated on the training of classifiers for the MNIST database of handwritten digits and for CIFAR-10. The results show that the proposed algorithm outperforms heuristically tuned optimizers such as ADAM, RMSprop, L-BFGS with a backtracking line search, and L-BFGS with a constant step size. The numerical results also show that a learned policy can be used as a warm-start to train new policies for different problems after a few additional training steps, highlighting its potential use in multiple large-scale optimization problems.
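To make the idea concrete, below is a minimal Python sketch (not the authors' implementation) of L-BFGS in which the usual backtracking or constant step size is replaced by a learned step-size policy. The two-loop recursion is the standard L-BFGS update; the `step_size_policy` function, its input features, and the weights `w` are hypothetical stand-ins for the paper's neural network, chosen only to show how the policy can use local information of the current iterate and keep its output inside a pre-defined interval [alpha_min, alpha_max] via a sigmoid. In the paper, such policy weights would be trained on similar optimization problems by backpropagation through time over the unrolled iterations, rather than fixed by hand as here.

```python
# Minimal sketch of L-BFGS with a learned step-size policy (illustrative only).
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns the search direction -H*grad."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # scale by gamma = s'y / y'y, a common initial Hessian guess
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

def step_size_policy(features, w, alpha_min=1e-4, alpha_max=1.0):
    """Hypothetical policy: affine map of local features squashed into [alpha_min, alpha_max]."""
    z = w @ features
    return alpha_min + (alpha_max - alpha_min) / (1.0 + np.exp(-z))

def lbfgs_learned_step(f_grad, x0, w, memory=5, iters=50):
    x = x0.copy()
    g = f_grad(x)
    s_list, y_list = [], []
    for _ in range(iters):
        d = two_loop_recursion(g, s_list, y_list)
        # Local information of the current iterate used as policy input
        # (an illustrative feature choice, not taken from the paper).
        features = np.array([np.log1p(np.linalg.norm(g)),
                             np.log1p(np.linalg.norm(d)),
                             g @ d])
        alpha = step_size_policy(features, w)
        x_new = x + alpha * d            # no line search, no extra f-evaluations
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                # keep curvature pairs only if s'y > 0
            s_list.append(s); y_list.append(y)
            if len(s_list) > memory:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# Toy usage on a quadratic; in the paper, w would be learned from data by BPTT.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])
    f_grad = lambda x: A @ x
    x_star = lbfgs_learned_step(f_grad, np.ones(3), w=np.array([0.1, 0.1, -0.01]))
    print(np.linalg.norm(x_star))
```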