Paper Title

Learning End-to-End Action Interaction by Paired-Embedding Data Augmentation

Authors

Ziyang Song, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang

Abstract

In recognition-based action interaction, robots' responses to human actions are often pre-designed according to recognized categories and thus stiff. In this paper, we specify a new Interactive Action Translation (IAT) task which aims to learn end-to-end action interaction from unlabeled interactive pairs, removing explicit action recognition. To enable learning on small-scale data, we propose a Paired-Embedding (PE) method for effective and reliable data augmentation. Specifically, our method first utilizes paired relationships to cluster individual actions in an embedding space. Then two actions originally paired can be replaced with other actions in their respective neighborhood, assembling into new pairs. An Act2Act network based on conditional GAN follows to learn from augmented data. Besides, IAT-test and IAT-train scores are specifically proposed for evaluating methods on our task. Experimental results on two datasets show impressive effects and broad application prospects of our method.
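The core of the Paired-Embedding (PE) augmentation described above is that two actions originally paired can each be swapped with nearby actions in an embedding space to assemble new pairs. A minimal sketch of that pair-expansion step is below; it assumes the action embeddings are already given, and it approximates the paper's paired-relationship clustering with a simple k-nearest-neighbor neighborhood. The function name `augment_pairs` and all parameters are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def augment_pairs(emb_a, emb_b, pairs, k=1):
    """Expand (human, response) action pairs by neighborhood substitution.

    emb_a, emb_b: (N, d) and (M, d) arrays of action embeddings.
    pairs: list of (i, j) index pairs originally observed together.
    k: how many nearest neighbors each action may be replaced with.
    """
    def knn(emb, idx, k):
        # Euclidean distances from action `idx` to all actions on its side
        d = np.linalg.norm(emb - emb[idx], axis=1)
        d[idx] = np.inf  # exclude the action itself
        return np.argsort(d)[:k]

    new_pairs = set(pairs)
    for i, j in pairs:
        # Replace either side of the pair with a close neighbor
        for ni in knn(emb_a, i, k):
            new_pairs.add((int(ni), j))
        for nj in knn(emb_b, j, k):
            new_pairs.add((i, int(nj)))
    return sorted(new_pairs)
```

For example, with one observed pair and tight clusters on each side, the single pair expands to three: the original plus one substitution on each side. The augmented pairs would then feed the conditional-GAN-based Act2Act network as additional training data.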
