Paper Title
Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters
Paper Authors
Paper Abstract
Despite the recent success of artificial neural networks, more biologically plausible learning methods may be needed to resolve the weaknesses of backpropagation-trained models, such as catastrophic forgetting and adversarial attacks. Although these weaknesses are not specifically addressed, a novel local learning rule is presented that performs online clustering with an upper limit on the number of clusters to be found rather than a fixed cluster count. Instead of using orthogonal weight or output activation constraints, activation sparsity is achieved by mutual repulsion of lateral Gaussian neurons, ensuring that multiple neuron centers cannot occupy the same location in the input domain. An update method is also presented for adjusting the widths of the Gaussian neurons in cases where the data samples can be represented by means and variances. The algorithms were applied to the MNIST and CIFAR-10 datasets to create filters capturing the input patterns of pixel patches of various sizes. The experimental results demonstrate stability in the learned parameters across a large number of training samples.
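The abstract does not give the exact update equations, but the general idea of online clustering with an upper bound on the neuron count, laterally repelling Gaussian centers, and variance-driven width updates can be illustrated with a small Python sketch. The code below is not the paper's algorithm: the class name GaussianNeuronLayer, the winner-take-all attraction step, the exponential repulsion term, the learning rates, and the running width estimate are all illustrative assumptions chosen for clarity.

# Minimal sketch of online clustering with Gaussian neurons.
# NOTE: illustrative only; the specific activation, repulsion, and width
# update forms below are assumptions, not the rules from the paper.

import numpy as np

class GaussianNeuronLayer:
    def __init__(self, n_neurons, dim, lr=0.05, repulsion=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(0.0, 1.0, size=(n_neurons, dim))  # neuron centers
        self.widths = np.full(n_neurons, 0.5)                        # Gaussian widths (sigma)
        self.lr = lr                # learning rate for center and width updates
        self.repulsion = repulsion  # strength of lateral repulsion between centers

    def activations(self, x):
        # Gaussian activation of each neuron for input sample x.
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2))

    def update(self, x):
        # One online step: attract the winning center toward x, repel the
        # other centers away from the winner, and nudge the winner's width
        # toward the local spread of its assigned samples.
        a = self.activations(x)
        w = int(np.argmax(a))  # winning neuron

        # Attraction: move the winning center toward the data sample.
        self.centers[w] += self.lr * (x - self.centers[w])

        # Mutual repulsion: push the other centers away from the winner so
        # that multiple centers cannot occupy the same input location.
        diff = self.centers - self.centers[w]
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-8
        push = self.repulsion * np.exp(-dist) * (diff / dist)
        push[w] = 0.0
        self.centers += push

        # Width update (assumed form): running estimate of the distance of
        # samples assigned to the winner, used as its Gaussian width.
        self.widths[w] += self.lr * (np.linalg.norm(x - self.centers[w]) - self.widths[w])
        return w

# Usage: stream 2-D samples from three blobs through the layer; with an upper
# limit of 8 neurons, only a few neurons end up covering the data.
rng = np.random.default_rng(1)
blobs = np.concatenate([rng.normal(c, 0.05, size=(500, 2))
                        for c in ([0.2, 0.2], [0.8, 0.3], [0.5, 0.8])])
rng.shuffle(blobs)
layer = GaussianNeuronLayer(n_neurons=8, dim=2)
for x in blobs:
    layer.update(x)
print(np.round(layer.centers, 2))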