Paper Title
Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information
Paper Authors
Paper Abstract
Speech Emotion Recognition (SER) aims to help machines understand humans' subjective emotions from audio information alone. However, extracting and utilizing comprehensive, in-depth audio information is still a challenging task. In this paper, we propose an end-to-end speech emotion recognition system that uses multi-level acoustic information with a newly designed co-attention module. We first extract multi-level acoustic information, including MFCC, spectrogram, and embedded high-level acoustic information, with CNN, BiLSTM, and wav2vec2, respectively. These extracted features are then treated as multimodal inputs and fused by the proposed co-attention mechanism. Experiments are carried out on the IEMOCAP dataset, and our model achieves competitive performance under two different speaker-independent cross-validation strategies. Our code is available on GitHub.
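The abstract only names the fusion design, so as a concrete illustration, below is a minimal PyTorch sketch of one plausible co-attention fusion over the three utterance-level feature streams. The class name `CoAttentionFusion`, the projection dimensions, and the softmax stream-weighting are illustrative assumptions, not the authors' actual module; the four output classes follow the common IEMOCAP setup (angry, happy, neutral, sad).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionFusion(nn.Module):
    """Hypothetical sketch of co-attention fusion over three acoustic streams.

    Each stream (CNN-pooled MFCC, BiLSTM-pooled spectrogram, pooled wav2vec2
    embedding) is projected to a shared dimension; a joint context vector then
    scores each stream, and the weighted streams are concatenated and classified.
    """
    def __init__(self, dims=(40, 128, 768), d_model=128, n_classes=4):
        super().__init__()
        # One linear projection per input stream, mapping to the shared space
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        # Scores one attention weight per stream from the joint representation
        self.score = nn.Linear(d_model * len(dims), len(dims))
        self.classifier = nn.Linear(d_model * len(dims), n_classes)

    def forward(self, feats):
        # feats: list of three utterance-level feature tensors, shapes (B, dims[i])
        h = [torch.tanh(p(f)) for p, f in zip(self.proj, feats)]  # each (B, d_model)
        joint = torch.cat(h, dim=-1)                              # (B, 3 * d_model)
        w = F.softmax(self.score(joint), dim=-1)                  # (B, 3) stream weights
        # Re-weight each stream by its attention score, then concatenate
        fused = torch.cat([w[:, i:i + 1] * h[i] for i in range(len(h))], dim=-1)
        return self.classifier(fused)

# Usage with random stand-ins for the three pooled feature streams (batch of 2)
model = CoAttentionFusion()
mfcc = torch.randn(2, 40)    # e.g. CNN-pooled MFCC features
spec = torch.randn(2, 128)   # e.g. BiLSTM-pooled spectrogram features
w2v = torch.randn(2, 768)    # e.g. pooled wav2vec2 embeddings
logits = model([mfcc, spec, w2v])
print(logits.shape)          # torch.Size([2, 4]), one logit per emotion class
```

The key design choice this sketch mirrors is treating the three acoustic levels as separate "modalities" whose relative contribution is learned per utterance, rather than simply concatenating them with fixed weights.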