Paper Title

GL-RG: Global-Local Representation Granularity for Video Captioning

Authors

Liqi Yan, Qifan Wang, Yiming Cui, Fuli Feng, Xiaojun Quan, Xiangyu Zhang, Dongfang Liu

Abstract

Video captioning is a challenging task, as it requires accurately transforming visual understanding into natural language description. To date, state-of-the-art methods inadequately model the global-local representation across video frames for caption generation, leaving plenty of room for improvement. In this work, we approach the video captioning task from a new perspective and propose a GL-RG framework, namely \textbf{G}lobal-\textbf{L}ocal \textbf{R}epresentation \textbf{G}ranularity. Our GL-RG demonstrates three advantages over prior efforts: 1) we explicitly exploit extensive visual representations from different video ranges to improve linguistic expression; 2) we devise a novel global-local encoder that produces a rich semantic vocabulary capturing a descriptive granularity of video contents across frames; 3) we develop an incremental training strategy that organizes model learning in an incremental fashion to achieve optimal captioning behavior. Experimental results on the challenging MSR-VTT and MSVD datasets show that our GL-RG outperforms recent state-of-the-art methods by a significant margin. Code is available at \url{https://github.com/ylqi/GL-RG}.
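The abstract does not spell out the encoder's internals, so the following is a minimal PyTorch sketch of the global-local idea it describes: pooling frame features over the full video (global range) and over short windows (local range), then fusing the two granularities into one representation for a caption decoder. The class name, feature dimensions, window size, and fusion scheme are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class GlobalLocalEncoderSketch(nn.Module):
    """Illustrative sketch (not the authors' code): fuse video features
    pooled over different temporal ranges into one representation."""

    def __init__(self, feat_dim: int = 2048, hidden_dim: int = 512, window: int = 4):
        super().__init__()
        self.window = window  # local range length (assumed hyperparameter)
        # Separate projections for global-range and local-range features.
        self.global_proj = nn.Linear(feat_dim, hidden_dim)
        self.local_proj = nn.Linear(feat_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, feat_dim) per-frame CNN features.
        b, t, d = frame_feats.shape
        # Global range: average over the whole video.
        global_feat = self.global_proj(frame_feats.mean(dim=1))        # (b, hidden)
        # Local range: average over short non-overlapping windows,
        # then pool the window-level features.
        pad = (-t) % self.window
        if pad:  # repeat the last frame so num_frames divides into windows
            frame_feats = torch.cat(
                [frame_feats, frame_feats[:, -1:].expand(b, pad, d)], dim=1)
        windows = frame_feats.view(b, -1, self.window, d).mean(dim=2)  # (b, n_win, d)
        local_feat = self.local_proj(windows).max(dim=1).values        # (b, hidden)
        # Fuse the two granularities for the caption decoder.
        return self.fuse(torch.cat([global_feat, local_feat], dim=-1))

# Usage: encode 16 frames of 2048-d features for a batch of 2 videos.
enc = GlobalLocalEncoderSketch()
video = torch.randn(2, 16, 2048)
print(enc(video).shape)  # torch.Size([2, 512])
```

Max-pooling the window-level features is one plausible way to surface the most salient local content; the paper's actual fusion may differ.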
