Paper Title
AdaVITS: Tiny VITS for Low Computing Resource Speaker Adaptation
Paper Authors
Paper Abstract
Speaker adaptation in text-to-speech synthesis (TTS) finetunes a pre-trained TTS model to adapt it to new target speakers with limited data. While much effort has been devoted to this task, little work has addressed low-computing-resource scenarios, owing to the challenges of requiring a lightweight model with low computational complexity. In this paper, a tiny VITS-based TTS model, named AdaVITS, is proposed for low-computing-resource speaker adaptation. To effectively reduce the parameters and computational complexity of VITS, an iSTFT-based wave construction decoder is proposed to replace the upsampling-based decoder, which is resource-consuming in the original VITS. Besides, NanoFlow is introduced to share the density estimate across flow blocks, reducing the parameters of the prior encoder. Furthermore, to reduce the computational complexity of the textual encoder, scaled dot-product attention is replaced with linear attention. To deal with the instability caused by the simplified model, instead of using the original text encoder, phonetic posteriorgrams (PPGs) are utilized as the linguistic feature via a text-to-PPG module and then used as input to the encoder. Experiments show that AdaVITS can generate stable and natural speech in speaker adaptation with 8.97M model parameters and 0.72 GFLOPS of computational complexity.
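As a rough illustration of one simplification mentioned in the abstract, the sketch below contrasts standard scaled dot-product attention with the linear attention of Katharopoulos et al. that replaces it in the textual encoder. Function names, shapes, and the feature map are illustrative assumptions, not code from the AdaVITS implementation; the point is that the linear variant avoids the length-squared attention matrix.

# Minimal sketch (not the authors' code) of scaled dot-product vs. linear attention.
import torch
import torch.nn.functional as F

def scaled_dot_attention(q, k, v):
    # q, k, v: (batch, length, dim); O(length^2 * dim) time and memory.
    scores = torch.matmul(q, k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return torch.matmul(F.softmax(scores, dim=-1), v)

def linear_attention(q, k, v, eps=1e-6):
    # Kernel feature map phi(x) = elu(x) + 1 keeps values positive.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associate (K^T V) first: O(length * dim^2) instead of O(length^2 * dim).
    kv = torch.einsum("bld,ble->bde", k, v)
    normalizer = 1.0 / (torch.einsum("bld,bd->bl", q, k.sum(dim=1)) + eps)
    return torch.einsum("bld,bde,bl->ble", q, kv, normalizer)

q = k = v = torch.randn(2, 100, 64)
print(scaled_dot_attention(q, k, v).shape, linear_attention(q, k, v).shape)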
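Similarly, the following is a minimal, hypothetical sketch of what an iSTFT-based wave construction decoder head can look like: a small projection predicts per-frame magnitude and phase, and an inverse STFT reconstructs the waveform, avoiding a deep upsampling stack. The module name, kernel sizes, hidden size, and phase parameterization are assumptions for illustration only and are not taken from the paper.

# Illustrative iSTFT-based decoder head; hyperparameters are assumed, not the paper's.
import torch
import torch.nn as nn

class ISTFTWaveDecoder(nn.Module):
    def __init__(self, hidden_dim=192, n_fft=1024, hop_length=256):
        super().__init__()
        self.n_fft, self.hop_length = n_fft, hop_length
        freq_bins = n_fft // 2 + 1
        # One projection each for the magnitude and phase of the spectrogram.
        self.to_mag = nn.Conv1d(hidden_dim, freq_bins, kernel_size=7, padding=3)
        self.to_phase = nn.Conv1d(hidden_dim, freq_bins, kernel_size=7, padding=3)

    def forward(self, h):
        # h: (batch, hidden_dim, frames) latent from the encoder.
        mag = torch.exp(self.to_mag(h))                   # positive magnitudes
        phase = torch.sin(self.to_phase(h)) * torch.pi    # phase wrapped to [-pi, pi]
        spec = torch.polar(mag, phase)                    # complex spectrogram
        # iSTFT turns spectrogram frames back into audio samples.
        return torch.istft(spec, n_fft=self.n_fft, hop_length=self.hop_length,
                           window=torch.hann_window(self.n_fft, device=h.device))

wave = ISTFTWaveDecoder()(torch.randn(1, 192, 50))
print(wave.shape)  # roughly (frames - 1) * hop_length samples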