Paper Title

Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video

Authors

Hongsuk Choi, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee

Abstract

Despite the recent success of single image-based 3D human pose and shape estimation methods, recovering temporally consistent and smooth 3D human motion from a video is still challenging. Several video-based methods have been proposed; however, they fail to resolve the single image-based methods' temporal inconsistency issue due to a strong dependency on the static feature of the current frame. In this regard, we present a temporally consistent mesh recovery system (TCMR). It effectively focuses on the temporal information of past and future frames without being dominated by the current static feature. Our TCMR significantly outperforms previous video-based methods in temporal consistency with better per-frame 3D pose and shape accuracy. We also release our code. For the demo video, see https://youtu.be/WB3nTnSQDII. For the code, see https://github.com/hongsukchoi/TCMR_RELEASE.
