Paper Title

Brain-Inspired Learning on Neuromorphic Substrates

Paper Authors

Friedemann Zenke and Emre O. Neftci

Paper Abstract

Neuromorphic hardware strives to emulate brain-like neural networks and thus holds the promise for scalable, low-power information processing on temporal data streams. Yet, to solve real-world problems, these networks need to be trained. However, training on neuromorphic substrates creates significant challenges due to the offline character and the required non-local computations of gradient-based learning algorithms. This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates. Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL), an online algorithm for computing gradients in conventional Recurrent Neural Networks (RNNs), and biologically plausible learning rules for training Spiking Neural Networks (SNNs). Further, we motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity, diminishes the non-local information requirements, and empirically leads to good learning performance, thereby improving its applicability to neuromorphic substrates. In summary, our framework bridges the gap between synaptic plasticity and gradient-based approaches from deep learning and lays the foundations for powerful information processing on future neuromorphic hardware systems.
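
For readers unfamiliar with RTRL, the following is a minimal sketch of the recursion the abstract refers to, written in standard RTRL notation rather than taken from the paper itself:

```latex
% RTRL for a recurrent network with state h_t = f(h_{t-1}, x_t; \theta)
% and per-step loss L_t(h_t). The influence matrix
%   P_t = \partial h_t / \partial \theta
% is carried forward online, so no backward pass through time is needed:
P_t = \frac{\partial f}{\partial h_{t-1}}\, P_{t-1} + \frac{\partial f}{\partial \theta},
\qquad
\frac{\mathrm{d} L_t}{\mathrm{d} \theta} = \frac{\partial L_t}{\partial h_t}\, P_t
```

For a network of n neurons with O(n^2) weights, P_t has O(n^3) entries and the update costs O(n^4) operations per step, which is what motivates sparse approximations. A block-diagonal approximation of the Jacobian ∂f/∂h_{t-1} keeps only each neuron's dependence on its own past state; P_t then collapses into per-synapse eligibility traces that are local in space and time. Below is a hedged Python sketch of what this looks like for a layer of leaky integrate-and-fire neurons trained with a surrogate gradient; all names, the fast-sigmoid surrogate, and the hyperparameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T = 20, 5, 100
lam = 0.9                                   # membrane decay per time step
W = 0.1 * rng.standard_normal((n_out, n_in))

x = (rng.random((T, n_in)) < 0.1).astype(float)         # toy input spike trains
target = (rng.random((T, n_out)) < 0.05).astype(float)  # toy target spike trains

v = np.zeros(n_out)            # membrane potentials
e = np.zeros((n_out, n_in))    # eligibility traces: diagonal block of dv/dW
grad = np.zeros_like(W)

def surrogate(v, thr=1.0, beta=10.0):
    # Derivative of a fast sigmoid, a common surrogate for the spike nonlinearity.
    return 1.0 / (beta * np.abs(v - thr) + 1.0) ** 2

for t in range(T):
    v = lam * v + W @ x[t]                 # leaky integration of input spikes
    s = (v >= 1.0).astype(float)           # threshold crossing -> spikes
    # Block-diagonal RTRL: each trace follows dv_i/dW_ij = lam * e_ij + x_j,
    # dropping all cross-neuron terms of the full Jacobian. The reset's
    # contribution to the trace is ignored, a common simplification.
    e = lam * e + x[t][None, :]
    err = s - target[t]                    # per-neuron output error
    # Three-factor update: local error * surrogate derivative * eligibility trace.
    grad += (err * surrogate(v))[:, None] * e
    v = v * (1.0 - s)                      # reset membrane after a spike

W -= 0.01 * grad / T                       # one accumulated gradient step
```

Because every quantity in the loop (membrane potential, presynaptic trace, local error) is available at the synapse at the current time step, an update of this form can in principle run online on neuromorphic hardware, which is the connection between RTRL and biologically plausible plasticity rules that the abstract highlights.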
