Paper Title
Towards information-rich, logical text generation with knowledge-enhanced neural models
Paper Authors
Abstract
Text generation systems have made promising progress, driven by deep learning techniques, and have been widely applied in everyday life. However, existing end-to-end neural models tend to generate uninformative and generic text because they cannot ground the input context in background knowledge. To address this problem, many researchers have begun to incorporate external knowledge into text generation systems, an approach known as knowledge-enhanced text generation. Its challenges include how to select appropriate knowledge from large-scale knowledge bases, how to read and understand the extracted knowledge, and how to integrate that knowledge into the generation process. This survey gives a comprehensive review of knowledge-enhanced text generation systems, summarizes research progress on these challenges, and proposes open issues and research directions.
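The three challenges named in the abstract can be pictured as a pipeline: select knowledge, read it, then integrate it into generation. The following is a minimal illustrative sketch (not taken from the paper; the toy knowledge base and keyword-matching selector are assumptions standing in for real retrieval and neural components):

```python
# Illustrative sketch of knowledge-enhanced text generation:
# (1) select knowledge, (2) read/understand it, (3) integrate it into generation.
# The knowledge base and matching rule below are hypothetical placeholders.

KNOWLEDGE_BASE = {
    "paris": "Paris is the capital of France.",
    "python": "Python is a general-purpose programming language.",
}

def select_knowledge(context: str) -> list:
    """Challenge 1: pick facts relevant to the input context (toy keyword match)."""
    words = context.lower().split()
    return [fact for topic, fact in KNOWLEDGE_BASE.items() if topic in words]

def read_knowledge(facts: list) -> str:
    """Challenge 2: 'read and understand' the extracted facts (here, concatenation)."""
    return " ".join(facts)

def generate(context: str) -> str:
    """Challenge 3: integrate knowledge into the generated output."""
    knowledge = read_knowledge(select_knowledge(context))
    if knowledge:
        return knowledge          # knowledge-grounded response
    return "I am not sure."       # generic, uninformative fallback

print(generate("tell me about paris"))  # -> Paris is the capital of France.
```

In a real system each stage would be a learned component (a retriever, a knowledge encoder, and a decoder with knowledge attention); the sketch only shows where each challenge sits in the flow, and why a model without the knowledge path falls back to a generic reply.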