Paper Title


Neural-Symbolic Integration: A Compositional Perspective

Authors

Efthymia Tsamoura, Loizos Michael

Abstract


Despite significant progress in the development of neural-symbolic frameworks, the question of how to integrate a neural and a symbolic system in a \emph{compositional} manner remains open. Our work seeks to fill this gap by treating these two systems as black boxes to be integrated as modules into a single architecture, without making assumptions on their internal structure and semantics. Instead, we expect only that each module exposes certain methods for accessing the functions that the module implements: the symbolic module exposes a deduction method for computing the function's output on a given input, and an abduction method for computing the function's inputs for a given output; the neural module exposes a deduction method for computing the function's output on a given input, and an induction method for updating the function given input-output training instances. We are, then, able to show that a symbolic module -- with any choice for syntax and semantics, as long as the deduction and abduction methods are exposed -- can be cleanly integrated with a neural module, and facilitate the latter's efficient training, achieving empirical performance that exceeds that of previous work.
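The black-box interfaces described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: the class names, the toy `DigitSum` symbolic module, the `LookupNeural` stand-in for a neural network, and the candidate-selection heuristic in `train_step` are all hypothetical, chosen only to show how abduction lets output-level supervision train the neural component.

```python
from abc import ABC, abstractmethod
from itertools import product

class SymbolicModule(ABC):
    """Black-box symbolic component: only deduction and abduction are exposed."""
    @abstractmethod
    def deduce(self, inputs):
        """Compute the function's output on the given inputs."""
    @abstractmethod
    def abduce(self, output):
        """Return the input tuples that the function maps to `output`."""

class NeuralModule(ABC):
    """Black-box neural component: only deduction and induction are exposed."""
    @abstractmethod
    def deduce(self, x):
        """Predict an output label for raw input x."""
    @abstractmethod
    def induce(self, x, y):
        """Update the module with one input-output training instance."""

class DigitSum(SymbolicModule):
    """Toy symbolic module (hypothetical example): the sum of two digits 0-9."""
    def deduce(self, inputs):
        return sum(inputs)
    def abduce(self, output):
        return [(a, b) for a, b in product(range(10), repeat=2)
                if a + b == output]

class LookupNeural(NeuralModule):
    """Trivial stand-in for a neural network: a lookup table (illustration only)."""
    def __init__(self):
        self.table = {}
    def deduce(self, x):
        return self.table.get(x, 0)
    def induce(self, x, y):
        self.table[x] = y

def train_step(neural, symbolic, raw_inputs, target_output):
    """One training step with supervision only on the symbolic output:
    abduction proposes candidate labels for the raw inputs, and the neural
    module is updated on one candidate (here, the candidate closest to its
    current predictions -- a stand-in for a real weighting scheme)."""
    candidates = symbolic.abduce(target_output)
    current = tuple(neural.deduce(x) for x in raw_inputs)
    best = min(candidates,
               key=lambda c: sum(a != b for a, b in zip(c, current)))
    for x, y in zip(raw_inputs, best):
        neural.induce(x, y)
```

Note that `train_step` never inspects either module's internals: it relies only on the exposed deduction, abduction, and induction methods, which is the compositional black-box treatment the abstract describes.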
