Paper Title
Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection
Paper Authors
Paper Abstract
Unsupervised anomaly detection aims to build models that effectively detect unseen anomalies by training only on normal data. Although previous reconstruction-based methods have made fruitful progress, their generalization ability is limited by two critical challenges. First, the training dataset contains only normal patterns, which limits the model's generalization ability. Second, the feature representations learned by existing models often lack representativeness, which hampers their ability to preserve the diversity of normal patterns. In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability of unsupervised anomaly detection. Built on a convolutional autoencoder structure, AMSL incorporates a self-supervised learning module to learn general normal patterns and an adaptive memory fusion module to learn rich feature representations. Experiments on four public multivariate time series datasets demonstrate that AMSL significantly improves performance compared with other state-of-the-art methods. Specifically, on the largest dataset, the CAP sleep stage detection dataset with 900 million samples, AMSL outperforms the second-best baseline by more than 4% in both accuracy and F1 score. Apart from its enhanced generalization ability, AMSL is also more robust against input noise.
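To make the architecture described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a convolutional autoencoder augmented with an addressable memory of prototype items for multivariate time series windows. The layer sizes, the number of memory items, the use of a single memory rather than AMSL's adaptive fusion of multiple memories, the omission of the self-supervised transformation branch, and the per-window MSE anomaly score are all illustrative assumptions.

```python
# Minimal sketch of a memory-augmented convolutional autoencoder (assumptions only,
# not the AMSL reference implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """Addressable memory: re-expresses an encoding as a convex combination
    of learned prototype items representing normal patterns."""
    def __init__(self, num_items: int, feat_dim: int):
        super().__init__()
        self.items = nn.Parameter(torch.randn(num_items, feat_dim))

    def forward(self, z):                                # z: (batch, time, feat_dim)
        attn = F.softmax(z @ self.items.t(), dim=-1)     # addressing weights over items
        return attn @ self.items                         # memory-augmented representation

class ConvAEWithMemory(nn.Module):
    def __init__(self, in_channels: int, feat_dim: int = 64, num_items: int = 50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.memory = MemoryModule(num_items, feat_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(feat_dim, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, in_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                                # x: (batch, channels, time)
        z = self.encoder(x)                              # (batch, feat_dim, time')
        z_mem = self.memory(z.transpose(1, 2))           # address memory per time step
        return self.decoder(z_mem.transpose(1, 2))       # reconstruct the input window

# At test time, a large reconstruction error flags a window as anomalous.
model = ConvAEWithMemory(in_channels=8)
x = torch.randn(4, 8, 128)                               # 4 windows, 8 sensors, 128 steps
score = F.mse_loss(model(x), x, reduction='none').mean(dim=(1, 2))  # per-window score
```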