Paper Title
Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition
Paper Authors
Paper Abstract
Dialogue act recognition is a fundamental task for an intelligent dialogue system. Previous work models the whole dialog to predict dialog acts, which may introduce noise from unrelated sentences. In this work, we design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information. We revise the attention distribution to focus on local and contextual semantic information by incorporating the relative position information between utterances. Based on the finding that dialog length affects performance, we introduce a new dialog segmentation mechanism and analyze the effect of dialog length and context padding length under both online and offline settings. Experiments show that our method achieves promising performance on two datasets, Switchboard Dialogue Act and DailyDialog, with accuracies of 80.34\% and 85.81\%, respectively. Visualization of the attention weights shows that our method explicitly learns the context dependency between utterances.
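The core idea of revising the attention distribution with relative position information can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the Gaussian-shaped position bias, its width parameter, and the function name are assumptions chosen to show how attention over utterances can be focused on a local context window.

```python
import numpy as np

def local_contextual_attention(Q, K, V, width=2.0):
    """Self-attention over utterance representations, with scores biased
    toward utterances near the query position.

    Q, K, V: arrays of shape (n_utterances, d_model).
    width:   controls how quickly attention decays with distance
             (an illustrative assumption, not the paper's parameter).
    """
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                   # scaled dot-product scores
    pos = np.arange(n)
    rel = pos[None, :] - pos[:, None]               # relative position j - i
    bias = -(rel.astype(float) ** 2) / (2.0 * width ** 2)  # penalize distant utterances
    weights = np.exp(scores + bias)
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V, weights
```

With uniform content scores, the position bias alone concentrates each utterance's attention on itself and its immediate neighbors, which mirrors the paper's goal of attending to local context rather than the whole dialog.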