Title
The Royalflush System for VoxCeleb Speaker Recognition Challenge 2022
Authors
Abstract
In this technical report, we describe the Royalflush submissions for the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22). Our submissions cover track 1, which is for supervised speaker verification, and track 3, which is for semi-supervised speaker verification. For track 1, we develop a powerful U-Net-based speaker embedding extractor with a symmetric architecture. The proposed system achieves 2.06% EER and 0.1293 MinDCF on the validation set. Compared with the state-of-the-art ECAPA-TDNN, it obtains a relative improvement of 20.7% in EER and 22.70% in MinDCF. For track 3, we employ joint training with source-domain supervision and target-domain self-supervision to obtain a speaker embedding extractor. A subsequent clustering step then produces pseudo-speaker labels for the target domain. We adapt the speaker embedding extractor on all source and target domain data in a supervised manner, which allows it to fully leverage information from both domains. Moreover, clustering and supervised domain adaptation can be repeated until performance converges on the validation set. Our final submission is a fusion of 10 models and achieves 7.75% EER and 0.3517 MinDCF on the validation set.
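The abstract reports results in terms of EER (equal error rate) and MinDCF (minimum detection cost function), the two standard speaker-verification metrics. As a rough illustration of how these are computed from trial scores, here is a minimal NumPy sketch; the threshold sweep and the MinDCF parameters (`p_target=0.05`, unit miss/false-alarm costs, as commonly used in VoxSRC evaluations) are assumptions for illustration, not taken from the report itself.

```python
import numpy as np

def compute_eer(scores, labels):
    """EER: operating point where false-accept rate equals false-reject rate.

    scores: similarity scores, higher means more likely same-speaker.
    labels: 1 for target (same-speaker) trials, 0 for non-target trials.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)           # sweep thresholds at each sorted score
    sorted_labels = labels[order]
    n_target = sorted_labels.sum()
    n_nontarget = len(sorted_labels) - n_target
    # FRR: fraction of targets rejected (score <= threshold);
    # FAR: fraction of non-targets accepted (score > threshold).
    frr = np.cumsum(sorted_labels) / n_target
    far = 1.0 - np.cumsum(1 - sorted_labels) / n_nontarget
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0

def compute_mindcf(scores, labels, p_target=0.05, c_miss=1.0, c_fa=1.0):
    """Normalized minimum detection cost over all thresholds (NIST-style DCF)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    sorted_labels = labels[order]
    n_target = sorted_labels.sum()
    n_nontarget = len(sorted_labels) - n_target
    frr = np.cumsum(sorted_labels) / n_target
    far = 1.0 - np.cumsum(1 - sorted_labels) / n_nontarget
    dcf = c_miss * p_target * frr + c_fa * (1.0 - p_target) * far
    # Normalize by the cost of the trivial always-accept/always-reject system.
    return dcf.min() / min(c_miss * p_target, c_fa * (1.0 - p_target))
```

For perfectly separable scores both metrics are zero; real systems like those above trade the two error types off along the score threshold.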