Paper Title
Improving Deep Learning Models via Constraint-Based Domain Knowledge: a Brief Survey
Paper Authors
Paper Abstract
Deep Learning (DL) models have proven to perform extremely well on a wide variety of learning tasks, as they can learn useful patterns from large data sets. However, purely data-driven models may struggle when very difficult functions need to be learned or when not enough training data is available. Fortunately, in many domains prior information can be retrieved and used to boost the performance of DL models. This paper presents a first survey of the approaches devised to integrate domain knowledge, expressed in the form of constraints, into DL models to improve their performance, in particular targeting deep neural networks. We identify five (non-mutually exclusive) categories that encompass the main approaches to injecting domain knowledge: 1) acting on the feature space, 2) modifying the hypothesis space, 3) data augmentation, 4) regularization schemes, and 5) constrained learning.
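As a minimal, hedged illustration of one of these categories (4, regularization schemes), the sketch below adds a soft constraint-violation penalty to a standard training loss in PyTorch. This is not the surveyed papers' specific method; the network (SmallNet), the hypothetical bound constraint (outputs known a priori to lie in [0, 1]), and the weight lam are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative toy model; not taken from the surveyed works.
class SmallNet(nn.Module):
    def __init__(self, in_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)

def constraint_penalty(y_pred, lower=0.0, upper=1.0):
    # Hypothetical domain constraint: predictions should lie in [lower, upper].
    # The penalty is zero when the constraint is satisfied and grows with the violation.
    below = torch.relu(lower - y_pred)
    above = torch.relu(y_pred - upper)
    return (below + above).mean()

model = SmallNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
lam = 0.1  # weight of the constraint term (a tunable hyperparameter)

x = torch.rand(32, 4)   # toy batch of inputs
y = torch.rand(32, 1)   # toy targets, assumed to lie in [0, 1]

for _ in range(10):
    optimizer.zero_grad()
    y_pred = model(x)
    # Data-driven loss plus a regularization term encoding the domain constraint.
    loss = mse(y_pred, y) + lam * constraint_penalty(y_pred)
    loss.backward()
    optimizer.step()
```

The design choice here is the defining feature of regularization-based knowledge injection: the constraint is encouraged, not enforced, so the trade-off between fitting the data and satisfying the domain knowledge is controlled by the penalty weight.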