Paper Title


Guided-TTS 2: A Diffusion Model for High-quality Adaptive Text-to-Speech with Untranscribed Data

Authors

Sungwon Kim, Heeseung Kim, Sungroh Yoon

Abstract


We propose Guided-TTS 2, a diffusion-based generative model for high-quality adaptive TTS using untranscribed data. Guided-TTS 2 combines a speaker-conditional diffusion model with a speaker-dependent phoneme classifier for adaptive text-to-speech. We train the speaker-conditional diffusion model on large-scale untranscribed datasets for a classifier-free guidance method and further fine-tune the diffusion model on the reference speech of the target speaker for adaptation, which only takes 40 seconds. We demonstrate that Guided-TTS 2 shows comparable performance to high-quality single-speaker TTS baselines in terms of speech quality and speaker similarity with only ten seconds of untranscribed data. We further show that Guided-TTS 2 outperforms adaptive TTS baselines on multi-speaker datasets even with a zero-shot adaptation setting. Guided-TTS 2 can adapt to a wide range of voices using only untranscribed speech, which enables adaptive TTS with the voices of non-human characters such as Gollum in "The Lord of the Rings".
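The abstract describes guided sampling that combines a speaker-conditional diffusion model (via classifier-free guidance) with a phoneme-classifier gradient that steers generation toward the text. A minimal sketch of how such score combination is commonly done is below; the function name, NumPy arrays standing in for score tensors, and the two guidance scales `w` and `s` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def guided_score(score_uncond, score_cond, classifier_grad, w=1.0, s=1.0):
    """Combine score estimates for one guided diffusion sampling step.

    score_uncond:    model's score estimate without the conditioning signal
    score_cond:      score estimate with conditioning (e.g. speaker embedding)
    classifier_grad: gradient of the phoneme classifier's log-probability
                     w.r.t. the noisy sample, pulling toward the target text
    w: classifier-free guidance scale; s: classifier guidance scale
    (Toy sketch: real scores are tensors produced by the networks.)
    """
    # classifier-free guidance: extrapolate from uncond toward cond
    cf = score_uncond + w * (score_cond - score_uncond)
    # add classifier (phoneme) guidance on top
    return cf + s * classifier_grad

# toy check: with w=1 and s=0 the conditional score is returned unchanged
u = np.zeros(4)
c = np.ones(4)
g = np.full(4, 0.5)
out = guided_score(u, c, g, w=1.0, s=0.0)
```

With `w=0` the unconditional score is used as-is; larger `w` strengthens adherence to the speaker condition, and `s` trades naturalness against pronunciation accuracy.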
