Paper Title

Temporal Consistency Learning of inter-frames for Video Super-Resolution

Authors

Meiqin Liu, Shuo Jin, Chao Yao, Chunyu Lin, Yao Zhao

Abstract

Video super-resolution (VSR) aims to reconstruct high-resolution (HR) frames from a low-resolution (LR) reference frame and multiple neighboring frames. The key operation is to exploit the relatively misaligned frames for reconstructing the current frame while preserving the consistency of the results. Existing methods generally explore information propagation and frame alignment to improve VSR performance, but few studies focus on the temporal consistency between frames. In this paper, we propose a Temporal Consistency learning Network (TCNet) for VSR, trained in an end-to-end manner, to enhance the consistency of the reconstructed videos. A spatio-temporal stability module is designed to learn self-alignment across frames. In particular, correlative matching is employed to exploit the spatial dependency within each frame to maintain structural stability. Moreover, a self-attention mechanism is utilized to learn temporal correspondences and implement an adaptive warping operation for temporal consistency among multiple frames. In addition, a hybrid recurrent architecture is designed to leverage both short-term and long-term information. We further present a progressive fusion module that performs multistage fusion of spatio-temporal features, and the final reconstructed frames are refined with these fused features. Objective and subjective results of various experiments demonstrate that TCNet achieves superior performance on different benchmark datasets compared with several state-of-the-art methods.
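To make the attention-based alignment idea in the abstract more concrete, below is a minimal PyTorch sketch of cross-frame attention used as adaptive warping: queries come from the reference frame and keys/values from a neighboring frame, so the attention weights act as soft, content-adaptive correspondences. This is only an illustration of the general technique, not the authors' TCNet implementation; the module name `CrossFrameAttentionWarp` and all layer choices are assumptions.

```python
# Hypothetical sketch of attention-based adaptive warping between two frames'
# feature maps (not the authors' TCNet code).
import torch
import torch.nn as nn


class CrossFrameAttentionWarp(nn.Module):
    """Aligns neighbor-frame features to the reference frame via attention.

    Queries are computed from the reference frame, keys/values from the
    neighbor frame; the softmax attention map serves as a soft warping.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, ref_feat: torch.Tensor, nbr_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = ref_feat.shape
        q = self.to_q(ref_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.to_k(nbr_feat).flatten(2)                  # (B, C, HW)
        v = self.to_v(nbr_feat).flatten(2).transpose(1, 2)  # (B, HW, C)

        # Soft correspondence between every reference and neighbor position.
        attn = torch.softmax(torch.bmm(q, k) * self.scale, dim=-1)  # (B, HW, HW)
        warped = torch.bmm(attn, v)                                 # (B, HW, C)
        return warped.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Tiny feature maps keep the HW x HW attention matrix small.
    ref = torch.randn(1, 32, 16, 16)
    nbr = torch.randn(1, 32, 16, 16)
    aligned = CrossFrameAttentionWarp(32)(ref, nbr)
    print(aligned.shape)  # torch.Size([1, 32, 16, 16])
```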
