Paper Title
An Exact Mapping From ReLU Networks to Spiking Neural Networks
Paper Authors
Paper Abstract
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network, with or without convolutional, batch normalization, and max pooling layers, was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of the input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs results in a zero percent drop in accuracy on CIFAR10, CIFAR100, and the ImageNet-like data sets Places365 and PASS. More generally, our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
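The central idea of the abstract, that each ReLU activation can be carried losslessly by the timing of a single spike, can be illustrated with a minimal sketch. This is not the paper's exact integrate-and-fire construction; it only shows that a linear time-to-first-spike (TTFS) code is exactly invertible, so no accuracy need be lost. The coding window `[T_MIN, T_MAX]` and the layer-wise scale `a_max` (which the paper would derive from the representative training inputs it assumes access to) are illustrative assumptions here.

```python
import numpy as np

# Hypothetical coding window for one layer; larger activation -> earlier spike.
T_MIN, T_MAX = 0.0, 1.0

def relu(x):
    return np.maximum(x, 0.0)

def encode_ttfs(a, a_max):
    """Map activations a in [0, a_max] linearly to spike times in [T_MIN, T_MAX]."""
    return T_MAX - (T_MAX - T_MIN) * (a / a_max)

def decode_ttfs(t, a_max):
    """Invert the linear spike-time code back to activation values."""
    return a_max * (T_MAX - t) / (T_MAX - T_MIN)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)  # toy ReLU layer parameters
x = rng.normal(size=3)

a = relu(W @ x + b)               # ReLU layer output
a_max = a.max() + 1e-9            # illustrative layer-wise scale
t_spike = encode_ttfs(a, a_max)   # exactly one spike time per neuron
a_rec = decode_ttfs(t_spike, a_max)

# The code is exactly invertible, so the spike times carry the ReLU
# activations without approximation error.
assert np.allclose(a, a_rec)
```

In the paper's actual construction, this invertible code is realized by single-spike integrate-and-fire dynamics layer by layer, which is why the converted network matches the ReLU network's accuracy exactly rather than approximately.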