Paper Title

Blow up phenomena for gradient descent optimization methods in the training of artificial neural networks

Paper Authors

Davide Gallon, Arnulf Jentzen, Felix Lindner

Paper Abstract

In this article we investigate blow up phenomena for gradient descent optimization methods in the training of artificial neural networks (ANNs). Our theoretical analysis is focused on shallow ANNs with one neuron on the input layer, one neuron on the output layer, and one hidden layer. For ANNs with ReLU activation and at least two neurons on the hidden layer we establish the existence of a target function such that there exists a lower bound for the risk values of the critical points of the associated risk function which is strictly greater than the infimum of the image of the risk function. This allows us to demonstrate that every gradient flow trajectory with an initial risk smaller than this lower bound diverges. Furthermore, we analyze and compare various popular types of activation functions with regard to the divergence of gradient flow trajectories and gradient descent trajectories in the training of ANNs and with regard to the closely related question concerning the existence of global minimum points of the risk function.
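To make the setting concrete, the following is a minimal NumPy sketch of the architecture the abstract describes: a shallow 1-2-1 network (one input neuron, two hidden ReLU neurons, one output neuron) trained by plain gradient descent on a mean-squared risk, with the parameter norm logged so that divergence of a trajectory can be observed. The target function, initialization, step size, and iteration count below are illustrative assumptions, not the specific construction whose existence the paper establishes.

```python
import numpy as np

# Minimal sketch (not the paper's construction): a shallow 1-2-1 ReLU
# network trained by plain gradient descent on an L2 (mean-squared) risk.
# We log the parameter norm to observe whether the trajectory blows up.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)                      # inputs on [0, 1]
target = lambda t: np.maximum(t - 0.5, 0.0) ** 2    # placeholder target, an assumption

# Parameters: hidden weights w, hidden biases b, output weights v, output bias c.
w = rng.normal(size=2)
b = rng.normal(size=2)
v = rng.normal(size=2)
c = 0.0
lr = 1e-2  # illustrative step size

def risk(w, b, v, c):
    """Empirical L2 risk of the 1-2-1 ReLU network on the sample points."""
    h = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)
    return np.mean((h @ v + c - target(x)) ** 2)

for step in range(20001):
    # Forward pass through the hidden ReLU layer and the affine output layer.
    h_pre = w[None, :] * x[:, None] + b[None, :]
    h = np.maximum(h_pre, 0.0)
    err = h @ v + c - target(x)
    # Backward pass: gradients of the mean-squared risk w.r.t. all parameters.
    g_out = 2.0 * err / x.size
    g_v = h.T @ g_out
    g_c = g_out.sum()
    g_h = g_out[:, None] * v[None, :] * (h_pre > 0.0)
    g_w = (g_h * x[:, None]).sum(axis=0)
    g_b = g_h.sum(axis=0)
    # Plain gradient descent step.
    w -= lr * g_w; b -= lr * g_b; v -= lr * g_v; c -= lr * g_c
    if step % 5000 == 0:
        norm = np.sqrt(np.sum(w**2) + np.sum(b**2) + np.sum(v**2) + c**2)
        print(f"step {step:6d}  risk {risk(w, b, v, c):.3e}  |params| {norm:.3e}")
```

For a target like the paper's, whose risk infimum is not attained by any critical point, one would expect the printed parameter norm to grow without bound while the risk approaches its infimum; for the generic placeholder target above, the trajectory may instead simply converge.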
