Paper Title
Fine-Grained Egocentric Hand-Object Segmentation: Dataset, Model, and Applications
Paper Authors
Paper Abstract
Egocentric videos offer fine-grained information for high-fidelity modeling of human behaviors. Hands and the objects they interact with are a crucial aspect of understanding a viewer's behaviors and intentions. We provide a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities. Our dataset is the first to label detailed hand-object contact boundaries. We introduce a context-aware compositional data augmentation technique to adapt to out-of-distribution YouTube egocentric videos. We show that our robust hand-object segmentation model and dataset can serve as a foundational tool to boost or enable several downstream vision applications, including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos. Dataset and code are available at: https://github.com/owenzlz/EgoHOS
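The core operation behind compositional data augmentation of this kind is pasting a segmented hand-object foreground onto a new background frame using its per-pixel mask. The sketch below shows only this paste step with NumPy; it is illustrative and does not model the paper's context-aware placement logic (e.g., choosing plausible scenes and locations), and the function name `composite` is our own.

```python
import numpy as np

def composite(foreground, mask, background):
    """Paste a segmented hand-object foreground onto a new background.

    foreground, background: HxWx3 uint8 images of the same size.
    mask: HxW binary array marking hand/object pixels (1 = foreground).
    Note: this is just the naive paste; the paper's augmentation also
    selects context-compatible backgrounds and placements.
    """
    mask3 = mask[..., None].astype(bool)  # broadcast the mask over RGB channels
    return np.where(mask3, foreground, background)

# Toy example: paste a 2x2 white "hand-object" patch into a black frame.
fg = np.full((4, 4, 3), 255, dtype=np.uint8)  # white foreground image
bg = np.zeros((4, 4, 3), dtype=np.uint8)      # black background image
m = np.zeros((4, 4), dtype=np.uint8)
m[1:3, 1:3] = 1                               # foreground region of the mask
out = composite(fg, m, bg)
```

In practice the pasted region would come from a labeled EgoHOS frame and the background from an unlabeled target-domain video, so the synthesized image carries free segmentation labels (the pasted mask itself).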