Paper Title
Towards Universal Representation Learning for Deep Face Recognition
Paper Authors
Paper Abstract
Recognizing wild faces is extremely hard because they appear with all kinds of variations. Traditional methods either train with specifically annotated variation data from target domains, or adapt to the target domain by introducing unlabeled variation data. Instead, we propose a universal representation learning framework that can handle larger variations unseen in the given training data without leveraging target-domain knowledge. We first synthesize training data with semantically meaningful variations, such as low resolution, occlusion, and head pose. However, directly feeding the augmented data into training does not converge well, because the newly introduced samples are mostly hard examples. We propose to split the feature embedding into multiple sub-embeddings and to associate a different confidence value with each sub-embedding to smooth the training procedure. The sub-embeddings are further decorrelated by regularizing variation classification loss and variation adversarial loss on different partitions of them. Experiments show that our method achieves top performance on general face recognition datasets such as LFW and MegaFace, and performs significantly better on extreme benchmarks such as TinyFace and IJB-S.
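The sub-embedding idea from the abstract, splitting one feature vector into partitions and weighting each partition's similarity by a confidence value, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the equal-size split, and the normalized confidence weighting are all assumptions for exposition.

```python
import math

def split_sub_embeddings(embedding, num_splits):
    # Partition a flat feature vector into equal-sized sub-embeddings.
    # Equal sizing is an assumption for this sketch.
    assert len(embedding) % num_splits == 0, "embedding must divide evenly"
    size = len(embedding) // num_splits
    return [embedding[i * size:(i + 1) * size] for i in range(num_splits)]

def cosine(a, b):
    # Cosine similarity between two sub-embeddings (epsilon avoids /0).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def confidence_weighted_score(subs_a, subs_b, confidences):
    # Fuse per-sub-embedding similarities, weighting each by its
    # (hypothetical) confidence so low-confidence partitions count less.
    total = sum(confidences)
    return sum(c / total * cosine(a, b)
               for c, a, b in zip(confidences, subs_a, subs_b))

# Example: two 8-D embeddings split into 4 sub-embeddings of 2 dims each.
emb_a = [0.9, 0.1, 0.4, 0.8, 0.2, 0.7, 0.5, 0.3]
emb_b = [0.8, 0.2, 0.5, 0.7, 0.1, 0.9, 0.4, 0.4]
subs_a = split_sub_embeddings(emb_a, 4)
subs_b = split_sub_embeddings(emb_b, 4)
score = confidence_weighted_score(subs_a, subs_b, [1.0, 0.5, 0.25, 0.25])
```

In the paper the confidences are learned during training to down-weight unreliable partitions under hard augmentations; here they are simply fixed numbers to show the aggregation.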