Paper Title
ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation
Paper Authors
Abstract
We present ProxEmo, a novel end-to-end emotion prediction algorithm for socially aware robot navigation among pedestrians. Our approach predicts the perceived emotion of a pedestrian from walking gaits, which is then used for emotion-guided navigation that takes into account social and proxemic constraints. To classify emotions, we propose a multi-view skeleton graph convolution-based model that works with a commodity camera mounted on a moving robot. Our emotion recognition is integrated into a mapless navigation scheme and makes no assumptions about the environment or pedestrian motion. It achieves a mean average emotion prediction precision of 82.47% on the Emotion-Gait benchmark dataset. We outperform current state-of-the-art algorithms for emotion recognition from 3D gaits. We highlight its benefits in terms of navigation in indoor scenes using a Clearpath Jackal robot.
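The abstract's core building block is a skeleton graph convolution over gait data. The paper itself does not give implementation details here, so the following is only a minimal sketch of one generic graph-convolution step on a hypothetical 5-joint skeleton (the joint layout, layer width, and normalization choice are all illustrative assumptions, not the ProxEmo multi-view architecture):

```python
import numpy as np

# Hypothetical 5-joint skeleton: root, neck, head, left hand, right hand.
# (Assumed for illustration; real gait skeletons have ~16-25 joints.)
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
num_joints = 5

# Adjacency matrix with self-loops: A + I.
A = np.eye(num_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(X, W):
    """One graph-convolution step: aggregate each joint's neighborhood
    features via A_norm, project with learned weights W, apply ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

# X: per-joint 3D coordinates for a single frame; W maps 3 -> 8 channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((num_joints, 3))
W = rng.standard_normal((3, 8))
H = gcn_layer(X, W)
print(H.shape)  # one feature vector per joint
```

In a full gait-classification model, layers like this would be stacked over both the joint graph and the time axis (as in ST-GCN-style networks), with a classifier head mapping the pooled features to emotion classes.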