Title

BYOL for Audio: Exploring Pre-trained General-purpose Audio Representations

Authors

Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino

Abstract

Pre-trained models are essential as feature extractors in modern machine learning systems in various domains. In this study, we hypothesize that representations effective for general audio tasks should provide multiple aspects of robust features of the input sound. For recognizing sounds regardless of perturbations such as varying pitch or timbre, features should be robust to these perturbations. For serving the diverse needs of tasks such as recognition of emotions or music genres, representations should provide multiple aspects of information, such as local and global features. To implement our principle, we propose a self-supervised learning method: Bootstrap Your Own Latent (BYOL) for Audio (BYOL-A, pronounced "viola"). BYOL-A pre-trains representations of the input sound invariant to audio data augmentations, which makes the learned representations robust to the perturbations of sounds. Meanwhile, the BYOL-A encoder combines local and global features and calculates their statistics to make the representation provide multi-aspect information. As a result, the learned representations should provide robust and multi-aspect information to serve various needs of diverse tasks. We evaluated the general audio task performance of BYOL-A compared to previous state-of-the-art methods, and BYOL-A demonstrated generalizability with the best average result of 72.4% and the best VoxCeleb1 result of 57.6%. Extensive ablation experiments revealed that the BYOL-A encoder architecture contributes to most of the performance, and the remaining critical portion lies in the BYOL framework and BYOL-A augmentations. Our code is available online at https://github.com/nttcslab/byol-a for future studies.
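The invariance described above comes from the BYOL framework: an online network predicts the target network's projection of a differently augmented view of the same input, and training minimizes a normalized mean squared error, which equals 2 minus twice the cosine similarity of the two (L2-normalized) vectors. A minimal NumPy sketch of that objective follows; the function name `byol_loss` and the toy vectors are ours for illustration, not from the paper's code.

```python
import numpy as np

def byol_loss(q: np.ndarray, z: np.ndarray) -> float:
    """BYOL's normalized MSE: 2 - 2 * cos(q, z).

    q: online-network prediction for one augmented view.
    z: target-network projection for the other view
       (treated as a constant; no gradient flows through it).
    """
    q = q / np.linalg.norm(q)
    z = z / np.linalg.norm(z)
    return 2.0 - 2.0 * float(np.dot(q, z))

# Identical embeddings give loss 0; orthogonal ones give the maximum, 2.
print(byol_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # → 0.0
print(byol_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # → 2.0
```

Driving this loss to zero for augmented pairs is what makes the learned representation invariant to the augmentations, and hence robust to the corresponding perturbations of sounds.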
