Paper Title

Convergence of a New Learning Algorithm

Authors

Lin, Feng

Abstract

A new learning algorithm proposed by Brandt and Lin for neural networks [1], [2] has been shown to be mathematically equivalent to the conventional back-propagation learning algorithm, but has several advantages over back-propagation, including feedback-network-free implementation and biological plausibility. In this paper, we investigate the convergence of the new algorithm. A necessary and sufficient condition for the algorithm to converge is derived. A convergence measure is proposed to quantify the convergence rate of the new algorithm. Simulation studies are conducted to investigate the convergence of the algorithm with respect to the number of neurons, connection distance, connection density, the ratio of excitatory to inhibitory synapses, membrane potentials, and synapse strengths.
