Paper Title
Low-resource Low-footprint Wake-word Detection using Knowledge Distillation
Paper Authors
Paper Abstract
As virtual assistants have become more diverse and specialized, so has the demand for application- or brand-specific wake words. However, the wake-word-specific datasets typically used to train wake-word detectors are costly to create. In this paper, we explore two techniques to leverage acoustic modeling data for large-vocabulary speech recognition to improve a purpose-built wake-word detector: transfer learning and knowledge distillation. We also explore how these techniques interact with time-synchronous training targets to improve detection latency. Experiments are presented on the open-source "Hey Snips" dataset and a more challenging in-house far-field dataset. Using phone-synchronous targets and knowledge distillation from a large acoustic model, we are able to improve accuracy across dataset sizes for both datasets while reducing latency.
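For readers unfamiliar with the distillation objective the abstract refers to, the sketch below illustrates the standard soft-target formulation (a temperature-softened KL term from a teacher acoustic model blended with hard-label cross-entropy). It is a minimal illustration assuming a PyTorch setup; the function and parameter names (`distillation_loss`, `temperature`, `alpha`) are hypothetical and not taken from the paper.

```python
# Minimal knowledge-distillation loss sketch (illustrative, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_targets,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with
    standard cross-entropy on the hard wake-word labels."""
    # Soften both output distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradient magnitude matches the CE term.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, hard_targets)
    return alpha * kd + (1.0 - alpha) * ce
```

In this setup, the large speech-recognition acoustic model would play the teacher role and the small wake-word detector the student; `alpha` trades off how much the student follows the teacher's soft outputs versus the wake-word labels.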