Paper Title

Sparsity in Reservoir Computing Neural Networks

Paper Authors

Gallicchio, Claudio

Abstract

Reservoir Computing (RC) is a well-known strategy for designing Recurrent Neural Networks characterized by strikingly efficient training. The crucial aspect of RC is to properly instantiate the hidden recurrent layer that serves as the dynamical memory of the system. In this respect, the common recipe is to create a pool of randomly and sparsely connected recurrent neurons. While the aspect of sparsity in the design of RC systems has been debated in the literature, it is nowadays understood mainly as a way to enhance computational efficiency by exploiting sparse matrix operations. In this paper, we empirically investigate the role of sparsity in RC network design from the perspective of the richness of the developed temporal representations. We analyze sparsity both in the recurrent connections and in the connections from the input to the reservoir. Our results indicate that sparsity, in particular in the input-reservoir connections, plays a major role in developing internal temporal representations that have a longer short-term memory of past inputs and a higher dimensionality.
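
For context, below is a minimal sketch of how such a sparsely connected reservoir is typically instantiated, in the Echo State Network style. The parameter names and values (n_reservoir, rec_density, in_density, spectral_radius) are illustrative assumptions for this sketch, not the experimental configuration used in the paper.

import numpy as np

rng = np.random.default_rng(42)

n_inputs = 1           # input dimension (illustrative choice)
n_reservoir = 100      # number of recurrent reservoir neurons (illustrative)
rec_density = 0.1      # fraction of nonzero recurrent connections
in_density = 0.1       # fraction of nonzero input-to-reservoir connections
spectral_radius = 0.9  # target spectral radius for stable reservoir dynamics

# Sparse recurrent weight matrix: random weights with most entries zeroed out.
W = rng.uniform(-1.0, 1.0, size=(n_reservoir, n_reservoir))
W *= rng.random((n_reservoir, n_reservoir)) < rec_density
# Rescale so the largest eigenvalue magnitude matches the target spectral radius.
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

# Sparse input-to-reservoir weight matrix: with low density, each reservoir
# neuron is driven directly by only a subset of the input channels.
W_in = rng.uniform(-1.0, 1.0, size=(n_reservoir, n_inputs))
W_in *= rng.random((n_reservoir, n_inputs)) < in_density

def run_reservoir(inputs):
    # State update x(t) = tanh(W_in u(t) + W x(t-1)); in RC, only a readout
    # trained on the collected states would be learned, not W or W_in.
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Example: collect reservoir states for a random input sequence.
states = run_reservoir(rng.uniform(-1.0, 1.0, size=200))
print(states.shape)  # (200, 100)

In this sketch, in_density is the input-reservoir sparsity knob that the paper's abstract singles out: lowering it leaves some neurons driven only indirectly through recurrent connections, which the paper empirically links to richer temporal representations.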
