Paper Title
User-in-the-loop Adaptive Intent Detection for Instructable Digital Assistant
Paper Authors
Paper Abstract
People are becoming increasingly comfortable using Digital Assistants (DAs) to interact with services or connected objects. However, for non-programming users, the available possibilities for customizing their DA are limited and do not include the possibility of teaching the assistant new tasks. To make the most of the potential of DAs, users should be able to customize assistants by instructing them through Natural Language (NL). To provide such functionalities, NL interpretation in traditional assistants should be improved: (1) the intent identification system should be able to recognize new forms of known intents and to acquire new intents as they are expressed by the user; (2) in order to adapt to novel intents, the Natural Language Understanding (NLU) module should be sample-efficient and should not rely on a pretrained model. Rather, the system should continuously collect training data as it learns new intents from the user. In this work, we propose AidMe (Adaptive Intent Detection in Multi-Domain Environments), a user-in-the-loop adaptive intent detection framework that allows the assistant to adapt to its user by learning their intents as the interaction progresses. AidMe builds its repertoire of intents and collects data to train a semantic similarity model that can discriminate between the learned intents and autonomously discover new forms of known intents. AidMe addresses two major issues for instructable digital assistants: intent learning and user adaptation. We demonstrate the capabilities of AidMe as a standalone system by comparing it with a one-shot learning system and a pretrained NLU module through simulations of interactions with a user. We also show how AidMe can be smoothly integrated into an existing instructable digital assistant.
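To make the described loop concrete, the Python sketch below is a minimal illustration, not the authors' AidMe implementation: the class name `InLoopIntentDetector`, the `ask_user` callback, the threshold value, and the lexical similarity stand-in are all assumptions introduced for illustration (AidMe trains a semantic similarity model from the data it collects during interaction). It only shows the general shape of a user-in-the-loop detector that matches an utterance against learned intents, asks the user when uncertain, and stores each confirmed utterance as new training data.

```python
# Illustrative sketch of a user-in-the-loop intent detection loop.
# NOTE: the similarity function is a crude lexical placeholder for the
# semantic similarity model that AidMe trains from collected interactions.

from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.75  # hypothetical decision threshold


def similarity(a: str, b: str) -> float:
    """Placeholder for the learned semantic similarity model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


class InLoopIntentDetector:
    def __init__(self) -> None:
        # intent name -> utterances collected from the user (training data)
        self.repertoire: dict[str, list[str]] = {}

    def best_match(self, utterance: str):
        """Return the closest known intent and its similarity score."""
        best_intent, best_score = None, 0.0
        for intent, examples in self.repertoire.items():
            score = max(similarity(utterance, ex) for ex in examples)
            if score > best_score:
                best_intent, best_score = intent, score
        return best_intent, best_score

    def handle(self, utterance: str, ask_user) -> str:
        """Detect the intent of `utterance`, putting the user in the loop when unsure."""
        intent, score = self.best_match(utterance)
        if intent is not None and score >= SIMILARITY_THRESHOLD:
            # Confident match: a new form of a known intent, added autonomously.
            self.repertoire[intent].append(utterance)
            return intent
        # Unknown or ambiguous: ask the user to name the intent, then learn it.
        intent = ask_user(utterance)
        self.repertoire.setdefault(intent, []).append(utterance)
        return intent


if __name__ == "__main__":
    detector = InLoopIntentDetector()
    # Simulated user answers for unknown utterances.
    answers = iter(["turn_on_lights", "play_music"])
    ask = lambda utterance: next(answers)
    print(detector.handle("switch on the lights", ask))   # learned from the user
    print(detector.handle("switch on the light", ask))    # recognized autonomously
    print(detector.handle("put some music on", ask))      # new intent, learned
```

In this sketch, every confirmed utterance enlarges the repertoire, so the detector collects its own training data as the interaction progresses; in AidMe that growing dataset is what trains the semantic similarity model.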