Paper Title
ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation
Paper Authors
Paper Abstract
We present a GAN-based Transformer for general action-conditioned 3D human motion generation, including not only single-person actions but also multi-person interactive actions. Our approach consists of a powerful Action-conditioned motion TransFormer (ActFormer) under a GAN training scheme, equipped with a Gaussian Process latent prior. Such a design combines the strong spatio-temporal representation capacity of the Transformer, the superiority of GANs in generative modeling, and the inherent temporal correlations from the latent prior. Furthermore, ActFormer can be naturally extended to multi-person motions by alternately modeling temporal correlations and human interactions with Transformer encoders. To further facilitate research on multi-person motion generation, we introduce a new synthetic dataset of complex multi-person combat behaviors. Extensive experiments on NTU-13, NTU RGB+D 120, BABEL and the proposed combat dataset show that our method can adapt to various human motion representations and achieve superior performance over state-of-the-art methods on both single-person and multi-person motion generation tasks, demonstrating a promising step towards a general human motion generator.
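To make the described pipeline more concrete, below is a minimal PyTorch sketch of an action-conditioned Transformer generator driven by a Gaussian Process latent prior, in the spirit of the abstract's single-person setting. All class names, dimensions, and the RBF kernel length-scale are illustrative assumptions rather than the authors' released implementation; the GAN discriminator, training loop, and the multi-person interaction encoder are omitted.

```python
# Hypothetical sketch of an ActFormer-style generator (not the authors' code).
import torch
import torch.nn as nn


class ActionConditionedGenerator(nn.Module):
    """Maps a per-frame latent sequence plus an action label to a motion sequence."""

    def __init__(self, num_actions, latent_dim=256, pose_dim=72, num_layers=4, num_heads=8):
        super().__init__()
        self.action_embed = nn.Embedding(num_actions, latent_dim)   # action-label condition
        self.latent_proj = nn.Linear(latent_dim, latent_dim)        # project GP latent samples
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.pose_head = nn.Linear(latent_dim, pose_dim)            # decode per-frame pose

    def forward(self, z_seq, action):
        # z_seq: (B, T, latent_dim) latents drawn from a temporally correlated prior
        # action: (B,) integer action labels
        cond = self.action_embed(action).unsqueeze(1)               # (B, 1, D)
        x = self.latent_proj(z_seq) + cond                          # broadcast condition over time
        x = self.temporal_encoder(x)                                # model temporal correlations
        return self.pose_head(x)                                    # (B, T, pose_dim)


def sample_gp_latents(batch, length, dim, lengthscale=10.0):
    """Draw latent sequences from an RBF-kernel Gaussian Process prior over time."""
    t = torch.arange(length, dtype=torch.float32)
    cov = torch.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * lengthscale ** 2))
    cov = cov + 1e-4 * torch.eye(length)                            # jitter for numerical stability
    dist = torch.distributions.MultivariateNormal(torch.zeros(length), cov)
    return dist.sample((batch, dim)).permute(0, 2, 1)               # (B, T, dim)


# Usage: generate a 60-frame motion for two sequences of action class 3.
gen = ActionConditionedGenerator(num_actions=120)
z = sample_gp_latents(batch=2, length=60, dim=256)
motion = gen(z, torch.tensor([3, 3]))                               # (2, 60, 72)
```

The key design choice reflected here is that temporal smoothness comes from the correlated Gaussian Process prior rather than from the network alone, while the action label is injected as a learned embedding added to every frame token before the Transformer encoder.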