Paper Title

Logic Tensor Networks

Authors

Badreddine, Samy, Garcez, Artur d'Avila, Serafini, Luciano, Spranger, Michael

Abstract


Artificial Intelligence agents are required to learn from their surroundings and to reason about the knowledge that has been learned in order to make decisions. While state-of-the-art learning from data typically uses sub-symbolic distributed representations, reasoning is normally useful at a higher level of abstraction with the use of a first-order logic language for knowledge representation. As a result, attempts at combining symbolic AI and neural computation into neural-symbolic systems have been on the increase. In this paper, we present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning through the introduction of a many-valued, end-to-end differentiable first-order logic called Real Logic as a representation language for deep learning. We show that LTN provides a uniform language for the specification and the computation of several AI tasks such as data clustering, multi-label classification, relational learning, query answering, semi-supervised learning, regression and embedding learning. We implement and illustrate each of the above tasks with a number of simple explanatory examples using TensorFlow 2.

Keywords: Neurosymbolic AI, Deep Learning and Reasoning, Many-valued Logic.
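To give a flavor of what "many-valued, end-to-end differentiable first-order logic" means, the following is a minimal sketch of the kind of fuzzy connectives such a logic builds on. It assumes product t-norm semantics on truth values in [0, 1]; the function names are illustrative and are not the LTN library's API.

```python
# Illustrative sketch (not the LTN API): differentiable fuzzy connectives
# under product t-norm semantics, operating on truth values in [0, 1].

def fuzzy_and(a, b):
    # Product t-norm: a smooth, differentiable conjunction.
    return a * b

def fuzzy_or(a, b):
    # Probabilistic sum, the dual t-conorm of the product t-norm.
    return a + b - a * b

def fuzzy_not(a):
    # Standard involutive negation.
    return 1.0 - a

def fuzzy_implies(a, b):
    # Reichenbach implication: NOT(a) OR (a AND b), also smooth.
    return 1.0 - a + a * b

# Truth values near 0 or 1 recover classical Boolean behavior:
print(fuzzy_and(0.9, 0.8))      # ≈ 0.72
print(fuzzy_implies(0.2, 0.9))  # high truth: weak antecedent
```

Because each connective is a polynomial in its inputs, gradients flow through logical formulas, which is what lets a framework of this kind treat logical constraints as differentiable loss terms for a neural network.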
