Paper Title
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning
Paper Authors
Paper Abstract
A steady momentum of innovations and breakthroughs has convincingly pushed the limits of unsupervised image representation learning. Compared to static 2D images, video has one more dimension (time). The inherent supervision existing in such a sequential structure offers a fertile ground for building unsupervised learning models. In this paper, we compose a trilogy of exploring the basic and generic supervision in the sequence from spatial, spatiotemporal and sequential perspectives. We materialize the supervisory signals through determining whether a pair of samples is from one frame or from one video, and whether a triplet of samples is in the correct temporal order. We uniquely regard the signals as the foundation in contrastive learning and derive a particular form named Sequence Contrastive Learning (SeCo). SeCo shows superior results under the linear protocol on action recognition (Kinetics), untrimmed activity recognition (ActivityNet) and object tracking (OTB-100). More remarkably, SeCo demonstrates considerable improvements over recent unsupervised pre-training techniques, and leads the accuracy by 2.96% and 6.47% against fully-supervised ImageNet pre-training on the action recognition task on UCF101 and HMDB51, respectively. Source code is available at \url{https://github.com/YihengZhang-CV/SeCo-Sequence-Contrastive-Learning}.
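The sketch below is a minimal, illustrative re-creation (in PyTorch) of the three supervisory signals the abstract describes: a spatial (intra-frame) contrastive pair, a spatiotemporal (inter-frame, same-video) contrastive pair, and a sequential temporal-order check on a frame triplet. It is not the authors' implementation (see the linked repository for that); the encoder architecture, `order_head`, negative queue, temperature, and all shapes are assumptions made purely for this example.

```python
# Illustrative sketch only; NOT the official SeCo code.
# Assumed components: a toy frame encoder, an InfoNCE-style contrastive loss,
# a fixed queue of negative features, and a linear head for order verification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy frame encoder standing in for a 2D CNN backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def info_nce(query, positive, negatives, temperature=0.07):
    """Standard InfoNCE: match each query to its positive against queued negatives."""
    pos = (query * positive).sum(-1, keepdim=True)   # (B, 1)
    neg = query @ negatives.t()                      # (B, N)
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(len(query), dtype=torch.long)
    return F.cross_entropy(logits, labels)


def seco_style_losses(frames, encoder, order_head, queue):
    """
    frames: (B, 4, C, H, W) per video -- two augmented views of a frame at
    time t, plus two later frames t1 < t2 from the same video (assumed layout).
    """
    b = frames.size(0)
    z = encoder(frames.flatten(0, 1)).view(b, 4, -1)  # (B, 4, D)
    view_a, view_b, later_1, later_2 = z[:, 0], z[:, 1], z[:, 2], z[:, 3]

    # 1) Spatial signal: two augmented views of one frame form a positive pair.
    loss_intra = info_nce(view_a, view_b, queue)

    # 2) Spatiotemporal signal: two frames of one video form a positive pair
    #    against frames of other videos held in the negative queue.
    loss_inter = info_nce(view_a, later_1, queue)

    # 3) Sequential signal: classify whether a frame triplet is in the correct
    #    temporal order (label 1) or reversed (label 0).
    ordered = torch.cat([view_a, later_1, later_2], dim=-1)
    reversed_ = torch.cat([later_2, later_1, view_a], dim=-1)
    order_logits = order_head(torch.cat([ordered, reversed_], dim=0))
    order_labels = torch.cat([torch.ones(b), torch.zeros(b)]).long()
    loss_order = F.cross_entropy(order_logits, order_labels)

    return loss_intra + loss_inter + loss_order


if __name__ == "__main__":
    enc = Encoder(dim=128)
    order_head = nn.Linear(3 * 128, 2)
    queue = F.normalize(torch.randn(4096, 128), dim=-1)  # stand-in negatives
    frames = torch.randn(4, 4, 3, 64, 64)                # B=4 toy video clips
    print(seco_style_losses(frames, enc, order_head, queue).item())
```

In this reading of the abstract, the spatial and spatiotemporal signals are contrastive (pair-level) while the sequential signal is a simple order-verification classification over triplets; the three losses are summed here only for illustration.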