Paper Title

SFU-Store-Nav: A Multimodal Dataset for Indoor Human Navigation

Paper Authors

Zhang, Zhitian, Rhim, Jimin, Ahmadi, Taher, Yang, Kefan, Lim, Angelica, Chen, Mo

Paper Abstract


This article describes a dataset collected in a set of experiments involving human participants and a robot. The experiments were conducted in the Computing Science Robotics Lab at Simon Fraser University, Burnaby, BC, Canada, with the aim of gathering data containing common gestures, movements, and other behaviours that may indicate humans' navigational intent, relevant to autonomous robot navigation. The experiments simulate a shopping scenario in which human participants pick up items from their shopping lists and interact with a Pepper robot programmed to assist them. We collected visual data and motion capture data from 108 human participants. The visual data contains live recordings of the experiments, and the motion capture data contains the positions and orientations of the human participants in world coordinates. This dataset could be valuable for researchers in the robotics, machine learning, and computer vision communities.
