Paper Title

Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration

Paper Authors

Kishan Chandan, Jack Albertson, Shiqi Zhang

Paper Abstract

In human-robot collaboration domains, augmented reality (AR) technologies have enabled people to visualize the state of robots. Current AR-based visualization policies are designed manually, which requires substantial human effort and domain knowledge. When too little information is visualized, human users find the AR interface not useful; when too much information is visualized, they find it difficult to process the visualized information. In this paper, we develop a framework, called VARIL, that enables AR agents to learn visualization policies (what to visualize, when, and how) from demonstrations. We created a Unity-based platform for simulating warehouse environments where human-robot teammates collaborate on delivery tasks. We have collected a dataset that includes demonstrations of visualizing robots' current and planned behaviors. Results from experiments with real human participants show that, compared with competitive baselines from the literature, our learned visualization policies significantly increase the efficiency of human-robot teams, while reducing the distraction level of human users. VARIL has also been demonstrated in a mock warehouse built in the lab.
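The abstract gives no implementation details, so the sketch below is only a rough illustration of the core idea of learning a visualization policy from demonstrations: it casts the problem as supervised classification over (state, visualization-choice) pairs. The feature names, label set, and synthetic data are hypothetical assumptions for illustration, not VARIL's actual design.

```python
# Minimal sketch (not the authors' implementation): fit a visualization policy
# from demonstration data as a supervised classifier. All features, labels,
# and data below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical demonstration record: features describing the robot's state and
# the human's context, paired with the visualization choice a demonstrator made.
# Features: [distance_to_human (m), robot_just_replanned (0/1), time_to_next_waypoint (s)]
# Labels:   0 = show nothing, 1 = show current action, 2 = show planned path
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 20.0, n),   # distance between robot and human
    rng.integers(0, 2, n),       # 1 if the robot just replanned
    rng.uniform(0.0, 30.0, n),   # seconds until the next waypoint
])
# Synthetic "demonstrations": visualize more when the robot replans or is nearby.
y = np.where(X[:, 1] == 1, 2, np.where(X[:, 0] < 5.0, 1, 0))

# Learn a policy mapping state features to a visualization decision.
policy = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At run time, the AR agent would query the learned policy for each new state.
state = np.array([[3.2, 0, 12.0]])
print(policy.predict(state))  # e.g., [1] -> visualize the robot's current action
```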
