Paper Title

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

Paper Authors

Wei Fang, Zhaofei Yu, Yanqi Chen, Timothee Masquelier, Tiejun Huang, Yonghong Tian

Paper Abstract

Spiking Neural Networks (SNNs) have attracted enormous research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. However, formulating efficient and high-performance learning algorithms for SNNs remains challenging. Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters differ across brain regions, and propose a training algorithm that learns not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants makes the network less sensitive to initial values and speeds up learning. In addition, we reevaluate the pooling methods in SNNs and find that max-pooling does not lead to significant information loss and has the advantages of low computation cost and binary compatibility. We evaluate the proposed method on image classification tasks on the static MNIST, Fashion-MNIST, and CIFAR-10 datasets as well as the neuromorphic N-MNIST, CIFAR10-DVS, and DVS128 Gesture datasets. The experimental results show that the proposed method outperforms the state of the art in accuracy on nearly all datasets while using fewer time-steps. Our code is available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.
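
Method Sketch

The core idea of the paper is the Parametric LIF (PLIF) neuron, whose membrane time constant τ is optimized by gradient descent jointly with the synaptic weights. Below is a minimal PyTorch sketch of that idea, not the authors' implementation (see the linked repository for that): the reparameterization 1/τ = sigmoid(w) keeps τ > 1 during training, while the surrogate gradient, class names, and hyperparameter values here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SurrogateHeaviside(torch.autograd.Function):
    """Heaviside spike function with a sigmoid surrogate gradient
    (one common choice; the exact surrogate is an assumption here)."""

    @staticmethod
    def forward(ctx, v_minus_threshold):
        ctx.save_for_backward(v_minus_threshold)
        return (v_minus_threshold >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        alpha = 4.0  # surrogate sharpness (illustrative value)
        sg = torch.sigmoid(alpha * x)
        return grad_output * alpha * sg * (1.0 - sg)


class PLIFNeuron(nn.Module):
    """LIF neuron with a learnable membrane time constant.

    The time constant is reparameterized as 1/tau = sigmoid(w), so tau
    stays in (1, +inf) no matter how gradient descent updates w."""

    def __init__(self, init_tau=2.0, v_threshold=1.0, v_reset=0.0):
        super().__init__()
        assert init_tau > 1.0
        # Invert the sigmoid so that sigmoid(w) == 1 / init_tau at start.
        self.w = nn.Parameter(-torch.log(torch.tensor(init_tau - 1.0)))
        self.v_threshold = v_threshold
        self.v_reset = v_reset

    def forward(self, x_seq):
        # x_seq: input current with shape [T, batch, ...], iterated over time.
        v = torch.zeros_like(x_seq[0]) + self.v_reset
        spikes = []
        for x in x_seq:
            # Membrane update: v += (x - (v - v_reset)) / tau.
            v = v + (x - (v - self.v_reset)) * torch.sigmoid(self.w)
            s = SurrogateHeaviside.apply(v - self.v_threshold)
            # Hard reset to v_reset where a spike occurred.
            v = s * self.v_reset + (1.0 - s) * v
            spikes.append(s)
        return torch.stack(spikes)  # binary spike train, same shape as x_seq


# Usage sketch: 8 time-steps, batch of 4, 16 features.
neuron = PLIFNeuron(init_tau=2.0)
spikes = neuron(torch.rand(8, 4, 16))
```

Because the neuron's outputs are binary, max-pooling over such spike maps reduces to a logical OR, which is why the paper argues it is cheap to compute and compatible with binary spike representations.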
