Paper Title
Visually Supervised Speaker Detection and Localization via Microphone Array
Paper Authors
Paper Abstract
Active speaker detection (ASD) is a multi-modal task that aims to identify who, if anyone, is speaking from a set of candidates. Current audio-visual approaches for ASD typically rely on visually pre-extracted face tracks (sequences of consecutive face crops) and the corresponding monaural audio. However, their recall rate is often low, as only visible faces are included in the candidate set. Monaural audio may successfully detect the presence of speech activity but fails to localize the speaker due to the lack of spatial cues. Our solution extends the audio front-end with a microphone array. We train an audio convolutional neural network (CNN), in combination with beamforming techniques, to regress the speaker's horizontal position directly in the video frames. We propose to generate weak labels by running a pre-trained active speaker detector on pre-extracted face tracks. Our pipeline embraces the "student-teacher" paradigm, in which a trained "teacher" network produces pseudo-labels from the visual stream and a "student" audio network is trained to reproduce them. At inference time, the student network can localize the speaker in the visual frames directly from the audio input. Experimental results on newly collected data show that our approach significantly outperforms a variety of baselines as well as the teacher network itself, and also yields an excellent speech activity detector.
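The abstract describes a student-teacher pipeline in which a visual ASD model provides pseudo-label positions and an audio CNN learns to regress them from microphone-array input. Below is a minimal PyTorch sketch of what such a setup could look like; all module names, tensor shapes, the number of microphone channels, and the MSE loss choice are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the student-teacher setup outlined in the abstract.
# Everything here (architecture, shapes, loss) is a hypothetical illustration.
import torch
import torch.nn as nn


class StudentAudioCNN(nn.Module):
    """Audio 'student' CNN that regresses the speaker's horizontal position
    (normalized to [0, 1] across the video frame width) from multi-channel
    microphone-array features, e.g. per-channel spectrograms or beamformed
    energy maps."""

    def __init__(self, n_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over freq/time
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, freq, time) -> (batch,) predicted position
        return self.head(self.features(x)).squeeze(-1)


def train_step(student, optimizer, array_features, teacher_positions):
    """One training step. The 'teacher' (a pre-trained visual active speaker
    detector applied to face tracks, not shown here) supplies the pseudo-label
    positions; the student learns to reproduce them from audio alone."""
    optimizer.zero_grad()
    pred = student(array_features)
    loss = nn.functional.mse_loss(pred, teacher_positions)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student = StudentAudioCNN(n_channels=4)
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    # Dummy batch: 8 clips of 4-channel spectrograms, with pseudo-labels
    # standing in for the teacher's visually derived horizontal positions.
    feats = torch.randn(8, 4, 64, 100)
    labels = torch.rand(8)
    print("loss:", train_step(student, opt, feats, labels))
```

At inference, only the audio branch is needed: the trained student maps array features to a horizontal position in the frame, which is what lets it recall speakers whose faces are not visible to the teacher.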