Paper Title

OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics

Paper Authors

Fernando Amodeo, Fernando Caballero, Natalia Díaz-Rodríguez, Luis Merino

Paper Abstract

Scene graph generation from images is a task of great interest to applications such as robotics, because graphs are the main way to represent knowledge about the world and regulate human-robot interactions in tasks such as Visual Question Answering (VQA). Unfortunately, the corresponding area of machine learning is still in its infancy, and currently available solutions do not specialize well to concrete usage scenarios. Specifically, they do not take existing "expert" knowledge about the domain world into account, even though such knowledge may be necessary to provide the level of reliability demanded by the use case. In this paper, we propose an initial approximation to a framework called Ontology-Guided Scene Graph Generation (OG-SGG), which can improve the performance of an existing machine-learning-based scene graph generator using prior knowledge supplied in the form of an ontology (specifically, using the axioms defined within it); and we present results evaluated on a specific scenario founded in telepresence robotics. These results show quantitative and qualitative improvements in the generated scene graphs.
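To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation) of how ontology axioms can refine a scene graph generator's raw output: domain/range constraints on each predicate are used to discard predicted triples that are inconsistent with the ontology. All class and predicate names below are hypothetical examples.

```python
# Hypothetical domain/range axioms: predicate -> (allowed subject classes,
# allowed object classes). These stand in for axioms defined in an ontology.
AXIOMS = {
    "on": ({"cup", "laptop", "book"}, {"table", "desk", "shelf"}),
    "holding": ({"person"}, {"cup", "book", "phone"}),
}

def refine(triples, axioms):
    """Keep only (subject, predicate, object, score) triples whose subject
    and object classes satisfy the predicate's domain/range axioms."""
    kept = []
    for subj, pred, obj, score in triples:
        domain, rng = axioms.get(pred, (None, None))
        if domain is None:
            # No axiom defined for this predicate: keep the triple as-is.
            kept.append((subj, pred, obj, score))
        elif subj in domain and obj in rng:
            kept.append((subj, pred, obj, score))
        # Otherwise the triple violates the ontology and is filtered out.
    return kept

# Raw predictions from a (hypothetical) scene graph generator:
raw = [
    ("cup", "on", "table", 0.9),      # consistent with the axioms: kept
    ("table", "on", "cup", 0.4),      # violates domain/range: filtered out
    ("person", "holding", "cup", 0.8),
]
print(refine(raw, AXIOMS))
```

Running this keeps the two ontologically valid triples and drops the implausible `("table", "on", "cup")` prediction, illustrating the kind of qualitative cleanup the abstract describes.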
