Paper Title
Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training
Paper Authors
Paper Abstract
As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data such as audio and text. However, most RSNNs suffer from two problems: (1) due to a lack of architectural guidance, random recurrent connectivity is often adopted, which does not guarantee good performance; and (2) training RSNNs is in general challenging, bottlenecking achievable model accuracy. To address these problems, we propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs). Recurrence in ScSr-SNNs is introduced in a stereotyped manner by adding self-recurrent connections to spiking neurons, which implement local memory. The network dynamics are further enriched by skip connections between nonadjacent layers. Constructed from simplified self-recurrent and skip connections, ScSr-SNNs are able to realize recurrent behaviors similar to those of more complex RSNNs, while error gradients can be computed more straightforwardly owing to the mostly feedforward nature of the network. Moreover, we propose a new backpropagation (BP) method called backpropagated intrinsic plasticity (BIP) to further boost the performance of ScSr-SNNs by training intrinsic model parameters. Unlike standard intrinsic plasticity rules that adjust a neuron's intrinsic parameters according to neuronal activity, the proposed BIP method optimizes intrinsic parameters based on the backpropagated error gradient of a well-defined global loss function, in addition to synaptic weight training. On challenging speech and neuromorphic speech datasets, including TI46-Alpha, TI46-Digits, and N-TIDIGITS, the proposed ScSr-SNNs boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
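To make the architectural idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the two ingredients named in the abstract: self-recurrent spiking neurons, where each neuron feeds its own previous spike back to itself through a per-neuron weight, and a skip connection from a nonadjacent layer to the readout. The leaky-integrate-and-fire dynamics, the rectangular surrogate gradient, the threshold value, and the choice of membrane decay as the trainable intrinsic parameter (standing in for the BIP-trained intrinsic parameters) are assumptions made for this example only.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (an assumed choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

spike = SpikeFn.apply

class SelfRecurrentLIF(nn.Module):
    """LIF layer whose recurrence is restricted to self-connections (diagonal recurrent weights)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.ff = nn.Linear(in_dim, out_dim)                       # feedforward synaptic weights
        self.self_w = nn.Parameter(torch.zeros(out_dim))           # one self-recurrent weight per neuron
        self.decay = nn.Parameter(torch.full((out_dim,), 0.5))     # intrinsic parameter trained by backprop (assumed: membrane decay)

    def forward(self, x_seq):
        # x_seq: (time, batch, in_dim) -> spike trains: (time, batch, out_dim)
        T, B, _ = x_seq.shape
        v = torch.zeros(B, self.ff.out_features, device=x_seq.device)
        s = torch.zeros_like(v)
        out = []
        for t in range(T):
            # leaky integration + feedforward input + each neuron's own previous spike
            v = self.decay * v + self.ff(x_seq[t]) + self.self_w * s
            s = spike(v - 1.0)          # firing threshold assumed to be 1.0
            v = v * (1.0 - s)           # reset membrane potential where a spike occurred
            out.append(s)
        return torch.stack(out)

class ScSrNet(nn.Module):
    """Two self-recurrent layers; layer-1 spikes also skip past layer 2 to the readout."""
    def __init__(self, in_dim, hid, n_cls):
        super().__init__()
        self.l1 = SelfRecurrentLIF(in_dim, hid)
        self.l2 = SelfRecurrentLIF(hid, hid)
        self.readout = nn.Linear(2 * hid, n_cls)

    def forward(self, x_seq):
        s1 = self.l1(x_seq)
        s2 = self.l2(s1)
        # skip connection from the nonadjacent layer: concatenate time-averaged spikes
        feats = torch.cat([s1.mean(0), s2.mean(0)], dim=-1)
        return self.readout(feats)
```

Because both `self_w` and `decay` are ordinary `nn.Parameter`s, a standard optimizer step on the global loss updates the intrinsic parameters jointly with the synaptic weights, which is the spirit of the joint training described in the abstract; the exact BIP update rule is not specified here.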