Paper Title

Rethinking Evaluation in ASR: Are Our Models Robust Enough?

Authors

Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, Ronan Collobert, Gabriel Synnaeve

Abstract

Is pushing numbers on a single benchmark valuable in automatic speech recognition? Research results in acoustic modeling are typically evaluated based on performance on a single dataset. While the research community has coalesced around various benchmarks, we set out to understand generalization performance in acoustic modeling across datasets - in particular, if models trained on a single dataset transfer to other (possibly out-of-domain) datasets. We show that, in general, reverberative and additive noise augmentation improves generalization performance across domains. Further, we demonstrate that when a large enough set of benchmarks is used, average word error rate (WER) performance over them provides a good proxy for performance on real-world noisy data. Finally, we show that training a single acoustic model on the most widely-used datasets - combined - reaches competitive performance on both research and real-world benchmarks.
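The abstract uses average word error rate (WER) across a set of benchmarks as a proxy for real-world performance. Below is a minimal illustrative sketch, not the authors' code, of how per-dataset WER and its macro-average over benchmarks could be computed; the benchmark names and transcript pairs are made-up placeholders.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Hypothetical per-benchmark (reference, hypothesis) pairs for illustration only.
benchmarks = {
    "benchmark_a": [("the cat sat on the mat", "the cat sat on a mat")],
    "benchmark_b": [("hello world", "hello word")],
}

per_dataset_wer = {}
for name, pairs in benchmarks.items():
    # Corpus-level WER per dataset: total word errors over total reference words.
    total_errors = sum(word_error_rate(r, h) * len(r.split()) for r, h in pairs)
    total_words = sum(len(r.split()) for r, _ in pairs)
    per_dataset_wer[name] = total_errors / total_words

# Macro-average WER over benchmarks, i.e. the aggregate metric discussed in the abstract.
average_wer = sum(per_dataset_wer.values()) / len(per_dataset_wer)
print(per_dataset_wer, average_wer)
```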
