Paper Title

More Information Supervised Probabilistic Deep Face Embedding Learning

Paper Authors

Ying Huang, Shangfeng Qiu, Wenwei Zhang, Xianghui Luo, Jinzhuo Wang

Paper Abstract

Research using margin-based comparison losses has demonstrated the effectiveness of penalizing the distance between face features and their corresponding class centers. Despite their popularity and excellent performance, these losses do not explicitly encourage generic embedding learning for the open-set recognition problem. In this paper, we analyze margin-based softmax losses from a probabilistic view. From this perspective, we propose two general principles for designing new margin loss functions: 1) monotonic decrease and 2) margin probability penalty. Unlike methods optimized with a single comparison metric, we provide a new perspective that treats open-set face recognition as a problem of information transmission, in which the generalization capability of the face embedding is gained from more clean information. An auto-encoder architecture called Linear-Auto-TS-Encoder (LATSE) is proposed to corroborate this finding. Extensive experiments on several benchmarks demonstrate that LATSE helps the face embedding gain more generalization capability, boosting single-model performance with an open training dataset to more than $99\%$ on the MegaFace test.
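For background, the margin-based softmax losses referred to in the abstract typically take an additive angular-margin form such as the standard ArcFace-style loss below; the fraction inside the logarithm is the class posterior probability that a probabilistic view would analyze. This is a common illustrative formulation, not the specific loss or the LATSE architecture proposed in the paper. Here $\theta_{y_i}$ is the angle between the feature of sample $i$ and its class-center weight, $s$ is a scale factor, and $m$ is the angular margin:

$$
L = -\frac{1}{N}\sum_{i=1}^{N}\log
\frac{e^{\,s\cos(\theta_{y_i}+m)}}
{e^{\,s\cos(\theta_{y_i}+m)} + \sum_{j \neq y_i} e^{\,s\cos\theta_j}}
$$

Under this reading, the two proposed principles constrain how the penalized posterior should behave: the loss should decrease monotonically as the target-class probability grows, and the margin should act as a penalty on that probability.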
