Paper Title
Deep Video Prior for Video Consistency and Propagation
Paper Authors
Paper Abstract
Applying an image processing algorithm independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a single pair of original and processed videos instead of a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional neural network on a video with Deep Video Prior (DVP). Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods for blind video temporal consistency. We further extend DVP to video propagation and demonstrate its effectiveness in propagating three different types of information (color, artistic style, and object segmentation). A progressive propagation strategy with pseudo labels is also proposed to enhance DVP's performance on video propagation. Our source code is publicly available at https://github.com/ChenyangLEI/deep-video-prior.
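The core intuition behind DVP — a single network trained on one video cannot fit frame-varying flicker, so the inconsistency averages out — can be illustrated with a toy numerical sketch. This is not the paper's CNN: the "network" below is just one shared gain parameter per pixel (names like `gain`, `flicker`, and the learning-rate choice are illustrative assumptions), but it shows how fitting flickering targets with frame-shared weights recovers a temporally consistent output.

```python
import numpy as np

# Toy sketch of the Deep Video Prior intuition (NOT the paper's CNN):
# one set of shared weights maps a static input frame to all T processed
# frames.  Per-frame flicker in the targets cannot be fitted by weights
# that are the same for every frame, so it averages out during training.

rng = np.random.default_rng(0)

T, H, W = 30, 8, 8                       # frames, height, width
inp = rng.random((H, W))                 # static input frame, repeated over time
clean = 0.5 * inp                        # flicker-free "processed" signal
flicker = 0.2 * rng.standard_normal((T, 1, 1))  # per-frame inconsistency
targets = clean[None] + flicker          # processed video with temporal flicker

gain = np.zeros((H, W))                  # shared "network" parameters
lr = 0.5
for step in range(500):
    pred = gain * inp                    # same prediction for every frame
    # full-batch gradient of the mean squared error over all T frames
    grad = 2.0 * inp * (pred[None] - targets).mean(axis=0)
    gain -= lr * grad

output = gain * inp                      # one consistent output frame
residual = np.abs(output - clean).max()  # much smaller than the flicker
```

The residual left after training is on the order of the *mean* of the flicker (which shrinks with more frames), far below the flicker's own amplitude; the real method additionally relies on early stopping so the CNN captures the consistent content before it starts memorizing per-frame noise.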