Paper Title
Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression
Paper Authors
Paper Abstract
For neural video codecs, it is critical yet challenging to design an efficient entropy model that can accurately predict the probability distribution of the quantized latent representation. However, most existing video codecs directly reuse ready-made entropy models from image codecs to encode the residual or motion, and thus do not fully exploit the spatial-temporal characteristics of video. To this end, this paper proposes a powerful entropy model that efficiently captures both spatial and temporal dependencies. In particular, we introduce a latent prior that exploits the correlation among latent representations across frames to squeeze out temporal redundancy. Meanwhile, a dual spatial prior is proposed to reduce spatial redundancy in a parallel-friendly manner. In addition, our entropy model is versatile: besides estimating the probability distribution, it also generates the quantization step in a spatial-channel-wise manner. This content-adaptive quantization mechanism not only helps our codec achieve smooth rate adjustment within a single model but also improves the final rate-distortion performance through dynamic bit allocation. Experimental results show that, powered by the proposed entropy model, our neural codec achieves an 18.2% bitrate saving on the UVG dataset compared with H.266 (VTM) under its highest-compression-ratio configuration, a new milestone in the development of neural video codecs. The code is available at https://github.com/microsoft/DCVC.
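To make the two roles the abstract assigns to the entropy model concrete, below is a minimal PyTorch sketch of (a) content-adaptive quantization with an element-wise step and (b) bitrate estimation under a Gaussian whose parameters the entropy model would predict. This is an illustrative sketch, not the paper's implementation: the helper names (`quantize`, `estimate_bits`), the toy tensor shapes, and the assumption that the predicted step, mean, and scale share the latent's shape are all hypothetical; in the actual codec these quantities would come from the hybrid spatial-temporal priors.

```python
import torch
from torch.distributions import Normal

def quantize(y, q_step):
    """Content-adaptive quantization: scale each latent element by its own
    predicted step, round, then rescale. A larger step for an element means
    coarser quantization and fewer bits there (dynamic bit allocation);
    scaling all steps together gives smooth rate adjustment in one model."""
    return torch.round(y / q_step) * q_step

def estimate_bits(y_hat, mean, scale, q_step):
    """Estimate the coding cost of the quantized latent. Each quantized
    value occupies a bin of width q_step; the Gaussian probability mass
    of that bin gives its cost in bits under an ideal entropy coder."""
    gaussian = Normal(mean, scale)
    prob = gaussian.cdf(y_hat + 0.5 * q_step) - gaussian.cdf(y_hat - 0.5 * q_step)
    return -torch.log2(prob.clamp_min(1e-9)).sum()

# Toy usage: a 1x64x8x8 latent with fixed (hypothetical) predictions.
y = torch.randn(1, 64, 8, 8)
mean = torch.zeros_like(y)          # predicted means
scale = torch.full_like(y, 0.5)     # predicted scales
q_step = torch.full_like(y, 0.8)    # predicted spatial-channel-wise steps
y_hat = quantize(y, q_step)
print(f"estimated bits: {estimate_bits(y_hat, mean, scale, q_step).item():.1f}")
```

The key design point the sketch illustrates is that the same network output serves two purposes: the Gaussian parameters determine how cheaply each symbol can be coded, while the per-element step determines how aggressively it is quantized in the first place.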