Paper Title
Relative Attention-based One-Class Adversarial Autoencoder for Continuous Authentication of Smartphone Users
Paper Authors
Paper Abstract
Behavioral biometrics-based continuous authentication is a promising authentication scheme that uses behavioral biometrics recorded by built-in sensors to authenticate smartphone users throughout a session. However, current continuous authentication methods suffer from several limitations: 1) behavioral biometrics from impostors are needed to train continuous authentication models, yet the distribution of negative samples from diverse attackers is unknown, which is difficult to address in real-world scenarios; 2) most deep learning-based continuous authentication methods need to train two models to achieve good authentication performance: a deep learning model for deep feature extraction and a machine learning-based classifier for classification; 3) a weak capability to capture users' behavioral patterns leads to poor authentication performance. To address these issues, we propose a relative attention-based one-class adversarial autoencoder for continuous authentication of smartphone users. First, we propose a one-class adversarial autoencoder that learns latent representations of legitimate users' behavioral patterns and is trained only with legitimate smartphone users' behavioral biometrics. Second, we present a relative attention layer that captures richer contextual semantic representations of users' behavioral patterns by modifying the standard self-attention mechanism to use convolutional projections instead of linear projections when computing the attention maps. Experimental results demonstrate that we achieve superior performance of 1.05% EER, 1.09% EER, and 1.08% EER with a high authentication frequency (0.7 s) on three public datasets.
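To make the projection change concrete, the sketch below is a minimal, hypothetical PyTorch illustration of a self-attention layer whose query/key/value projections are 1-D convolutions rather than linear layers, in the spirit of the relative attention layer described in the abstract. It is not the authors' implementation: the class name ConvProjectionAttention, the kernel size, the channel count, the single attention head, and the residual connection are all assumptions made for clarity.

```python
# Minimal sketch (not the paper's released code): self-attention over a
# window of sensor readings with convolutional query/key/value projections.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvProjectionAttention(nn.Module):
    """Self-attention whose Q/K/V projections are 1-D convolutions
    (hypothetical layer and argument names, assumed for illustration)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Convolutional projections mix neighboring time steps, unlike the
        # point-wise linear projections of standard self-attention.
        self.q_proj = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.k_proj = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.v_proj = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, channels), e.g. features of motion-sensor windows
        x_c = x.transpose(1, 2)                  # (batch, channels, time)
        q = self.q_proj(x_c).transpose(1, 2)     # (batch, time, channels)
        k = self.k_proj(x_c).transpose(1, 2)
        v = self.v_proj(x_c).transpose(1, 2)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v + x                      # residual connection


# Usage example: a 0.7 s window sampled at an assumed ~100 Hz gives ~70 steps.
if __name__ == "__main__":
    layer = ConvProjectionAttention(channels=64)
    window = torch.randn(8, 70, 64)              # (batch, time, features)
    print(layer(window).shape)                   # torch.Size([8, 70, 64])
```

Compared with point-wise linear projections, the convolutional projections let each query and key aggregate a small temporal neighborhood of sensor readings before the attention weights are formed, which is one way to inject local context into the attention maps.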