Paper Title
SoMoFormer: Multi-Person Pose Forecasting with Transformers
Paper Authors
Paper Abstract
Human pose forecasting is a challenging problem involving complex human body motion and posture dynamics. When there are multiple people in the environment, one person's motion may also be influenced by the motion and dynamics of others. Although several previous works target the problem of multi-person dynamic pose forecasting, they typically either model the entire pose sequence as a time series (ignoring the underlying relationships between joints) or output the future pose sequence of only one person at a time. In this paper, we present a new method, called Social Motion Transformer (SoMoFormer), for multi-person 3D pose forecasting. Our transformer architecture uniquely models human motion input as a joint sequence rather than a time sequence, allowing us to perform attention over joints while predicting the entire future motion sequence for each joint in parallel. We show that with this problem reformulation, SoMoFormer naturally extends to multi-person scenes by using the joints of all people in a scene as input queries. Using learned embeddings to denote the type of joint, person identity, and global position, our model learns the relationships between joints and between people, attending more strongly to joints from the same or nearby people. SoMoFormer outperforms state-of-the-art methods for long-term motion prediction on the SoMoF benchmark as well as on the CMU-Mocap and MuPoTS-3D datasets. Code will be made available after publication.
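The joint-sequence reformulation described in the abstract can be made concrete with a small sketch. The PyTorch code below is an illustrative approximation only: the module names, layer counts, and dimensions (d_model, frame lengths, number of people) are assumptions, and it is not the authors' implementation, which had not been released at the time of writing. It treats every joint of every person as one input token, adds learned joint-type and person-identity embeddings (the global-position embedding mentioned in the abstract is omitted for brevity), attends over all joints jointly, and predicts each joint's entire future trajectory in parallel.

```python
# Minimal sketch of the joint-sequence idea, assuming PyTorch.
# All hyperparameters and module names here are hypothetical.
import torch
import torch.nn as nn

class JointSequenceForecaster(nn.Module):
    def __init__(self, num_joint_types=13, max_people=4,
                 t_in=16, t_out=14, d_model=128):
        super().__init__()
        # Each token is one joint's observed xyz trajectory, flattened.
        self.traj_proj = nn.Linear(t_in * 3, d_model)
        # Learned embeddings for joint type and person identity,
        # as described in the abstract.
        self.joint_emb = nn.Embedding(num_joint_types, d_model)
        self.person_emb = nn.Embedding(max_people, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Predict the full future xyz sequence of each joint in parallel.
        self.head = nn.Linear(d_model, t_out * 3)

    def forward(self, joints, joint_ids, person_ids):
        # joints: (B, P*J, t_in, 3) -- all people's joints in one sequence.
        B, N = joints.shape[:2]
        x = self.traj_proj(joints.flatten(2))           # (B, N, d_model)
        x = x + self.joint_emb(joint_ids) + self.person_emb(person_ids)
        x = self.encoder(x)                             # attention over joints
        return self.head(x).view(B, N, -1, 3)          # (B, N, t_out, 3)

# Toy usage: 2 people x 13 joints, 16 observed frames -> 14 future frames.
B, P, J = 1, 2, 13
joints = torch.randn(B, P * J, 16, 3)
joint_ids = torch.arange(J).repeat(P).unsqueeze(0)             # joint type
person_ids = torch.arange(P).repeat_interleave(J).unsqueeze(0)  # person id
pred = JointSequenceForecaster()(joints, joint_ids, person_ids)
print(pred.shape)  # torch.Size([1, 26, 14, 3])
```

Because every (person, joint) pair is its own token, self-attention can weight joints from the same or nearby people more strongly, which is the mechanism the abstract attributes to the learned identity and position embeddings.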