Paper Title
A Multimodal Sensor Fusion Framework Robust to Missing Modalities for Person Recognition
Paper Authors
Abstract
Person recognition can be made more robust by exploiting the complementary sensor characteristics of audio, visible cameras, and thermal cameras. Existing multimodal person recognition frameworks are primarily formulated under the assumption that multimodal data are always available. In this paper, we propose a novel trimodal sensor fusion framework using audio, visible, and thermal cameras that addresses the missing modality problem. Within the framework, a novel deep latent embedding model, termed AVTNet, is proposed to learn multiple latent embeddings. In addition, a novel loss function, termed the missing modality loss, accounts for possibly missing modalities in the triplet loss calculation while learning the individual latent embeddings. A joint latent embedding over the trimodal data is also learned using a multi-head attention transformer, which assigns attention weights to the different modalities. The resulting latent embeddings are subsequently used to train a deep neural network. The proposed framework is validated on the SpeakingFaces dataset. A comparative analysis with baseline algorithms shows that the proposed framework significantly increases person recognition accuracy while accounting for missing modalities.
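For intuition, below is a minimal PyTorch-style sketch of the two mechanisms named in the abstract: a triplet-style loss that masks out samples whose modality is absent, and multi-head attention fusion over per-modality embeddings. All names, dimensions, and the margin value are illustrative assumptions, not the paper's actual AVTNet specification.

```python
# Hypothetical sketch: masked triplet loss for missing modalities,
# and attention-based fusion of audio/visible/thermal embeddings.
# Class and function names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriModalFusion(nn.Module):
    """Fuses the three modality embeddings with multi-head self-attention."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, audio, visible, thermal):
        # Treat the three (B, dim) modality embeddings as a length-3 sequence.
        tokens = torch.stack([audio, visible, thermal], dim=1)  # (B, 3, dim)
        # Self-attention assigns weights across modalities.
        fused, attn_weights = self.attn(tokens, tokens, tokens)
        # Pool across modalities to obtain a single joint latent embedding.
        return self.proj(fused.mean(dim=1)), attn_weights

def missing_modality_triplet_loss(anchor, positive, negative,
                                  present, margin=0.2):
    """Triplet loss restricted to samples where this modality is present.

    `present` is a boolean mask of shape (B,); samples whose modality
    was not captured contribute zero loss instead of a spurious gradient.
    """
    per_sample = F.triplet_margin_loss(anchor, positive, negative,
                                       margin=margin, reduction='none')
    mask = present.float()
    return (per_sample * mask).sum() / mask.sum().clamp(min=1.0)
```

The key design point the abstract suggests is that missing modalities are handled in the loss rather than by imputing fake inputs: absent samples are simply excluded from the triplet computation, while the attention weights let the joint embedding lean on whichever modalities are informative.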