Paper Title
MaxDropoutV2: An Improved Method to Drop out Neurons in Convolutional Neural Networks
Authors
Abstract
In the last decade, exponential data growth has expanded the capacity of machine learning-based algorithms and enabled their use in daily-life activities. Such improvement is also partially explained by the advent of deep learning techniques, i.e., stacks of simple architectures that result in more complex models. Although both factors produce outstanding results, they also pose drawbacks for the learning process, since training complex models is an expensive task and the results are prone to overfit the training data. A supervised regularization technique called MaxDropout was recently proposed to tackle the latter, providing several improvements over traditional regularization approaches. In this paper, we present its improved version, called MaxDropoutV2. Results on two public datasets show that the model performs faster than the standard version and, in most cases, provides more accurate results.
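To make the underlying idea concrete, the sketch below illustrates MaxDropout-style regularization in NumPy: instead of zeroing random units as standard Dropout does, activations are min-max normalized and the *most active* ones (those above a threshold derived from the drop rate) are suppressed. This is a minimal illustration of the concept, not the authors' implementation; the function name, the epsilon constant, and the exact thresholding details are assumptions for the example.

```python
import numpy as np

def max_dropout(x, rate=0.3, training=True):
    """Illustrative MaxDropout-style pass (assumption: not the paper's code).

    Normalizes activations to [0, 1] and zeroes those exceeding
    (1 - rate), i.e., the most active neurons are dropped rather
    than randomly selected ones.
    """
    if not training or rate <= 0:
        return x
    x_min, x_max = x.min(), x.max()
    # Min-max normalization; epsilon avoids division by zero.
    norm = (x - x_min) / (x_max - x_min + 1e-8)
    # Keep only units whose normalized activation is at or below threshold.
    mask = norm <= (1.0 - rate)
    return x * mask

# For x = [0.1, 0.5, 0.9, 0.2] and rate = 0.3, the unit at 0.9
# normalizes to 1.0 (> 0.7) and is zeroed; the rest pass through.
```

MaxDropoutV2, as the abstract notes, targets the speed of this procedure; the accuracy comparisons are reported on the two public datasets mentioned above.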