Paper Title
RobustCaps: a transformation-robust capsule network for image classification
Paper Authors
Paper Abstract
Geometric transformations of the training and test data pose challenges to the use of deep neural networks for vision-based learning tasks. To address this issue, we present a deep neural network model that exhibits the desirable property of transformation robustness. Our model, termed RobustCaps, uses group-equivariant convolutions in an improved capsule network model. RobustCaps uses a global context-normalised procedure in its routing algorithm to learn transformation-invariant part-whole relationships within image data. Learning such relationships allows our model to outperform both capsule and convolutional neural network baselines on transformation-robust classification tasks. Specifically, RobustCaps achieves state-of-the-art accuracies on CIFAR-10, FashionMNIST, and CIFAR-100 when the images in these datasets are subjected to train- and test-time rotations and translations.
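The abstract credits much of the model's transformation robustness to group-equivariant convolutions. As a minimal NumPy sketch (not code from the paper, and not the full RobustCaps architecture), the following illustrates the core idea for the p4 group: a "lifting" layer correlates the image with all four 90° rotations of a single filter, so rotating the input merely rotates and cyclically permutes the output channels instead of changing the features. The function names here are illustrative, not the authors' API.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 'valid'-mode 2D cross-correlation (no padding)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def p4_lifting_conv(image, kernel):
    """Lift a 2D image to the p4 group (4 planar rotations):
    correlate the image with each 90°-rotated copy of the filter.
    Output shape: (4, H - kh + 1, W - kw + 1)."""
    return np.stack([correlate2d_valid(image, np.rot90(kernel, k))
                     for k in range(4)])

# Equivariance check: rotating the input rotates each feature map
# and shifts the rotation channel by one.
rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
ker = rng.standard_normal((3, 3))
out = p4_lifting_conv(img, ker)
out_rot = p4_lifting_conv(np.rot90(img), ker)
for k in range(4):
    assert np.allclose(out_rot[k], np.rot90(out[(k - 1) % 4]))
```

Because rotated inputs produce predictably permuted features rather than arbitrary new ones, later layers (in RobustCaps, the capsule routing) can learn relationships that hold regardless of the input's orientation.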