Paper Title

A Framework to Map VMAF with the Probability of Just Noticeable Difference between Video Encoding Recipes

Paper Authors

Jingwen Zhu, Suiyi Ling, Yoann Baveye, Patrick Le Callet

Paper Abstract

The Just Noticeable Difference (JND) model, developed based on the Human Visual System (HVS) through subjective studies, is valuable for many multimedia use cases. In the streaming industry, it is commonly applied to strike a good balance between compression efficiency and perceptual quality when selecting video encoding recipes. Nevertheless, recent state-of-the-art deep-learning-based JND prediction models rely on large-scale JND ground truth that is expensive and time-consuming to collect. Most existing JND datasets contain a limited number of contents and are restricted to a certain codec (e.g., H.264). As a result, JND prediction models trained on such datasets are normally not agnostic to the codec. To this end, in order to decouple encoding recipes from JND estimation, we propose a novel framework that maps the difference in objective Video Quality Assessment (VQA) scores, i.e., VMAF, between two videos encoded from the same content with different encoding recipes to the probability of there being a just noticeable difference between them. The proposed probability mapping model learns from DCR (Degradation Category Rating) test data, which is significantly cheaper to collect than a standard JND subjective test. As we utilize an objective VQA metric (e.g., VMAF, which was trained on contents encoded with different codecs) as a proxy to estimate JND, our model is codec-agnostic and computationally efficient. Through extensive experiments, it is demonstrated that the proposed model is able to estimate JND values efficiently.
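
To make the mapping idea concrete, below is a minimal sketch, assuming a logistic form for the ΔVMAF-to-probability mapping. The function name `jnd_probability` and the `slope`/`offset` parameters are hypothetical illustrations, not the paper's model; in the proposed framework the mapping is learned from DCR subjective data rather than hand-tuned.

```python
import math

def jnd_probability(vmaf_a: float, vmaf_b: float,
                    slope: float = 0.3, offset: float = 2.0) -> float:
    """Map the VMAF gap between two encodings of the same content to
    the probability that viewers notice a difference between them.

    slope/offset are illustrative placeholders; in the paper, the
    mapping model is learned from DCR subjective test data.
    """
    delta = abs(vmaf_a - vmaf_b)  # VMAF difference between the two recipes
    return 1.0 / (1.0 + math.exp(-(slope * delta - offset)))  # logistic map

# Example: two encoding recipes applied to the same source content.
p = jnd_probability(92.0, 85.0)
print(f"P(just noticeable difference) = {p:.2f}")
```

In practice, one could threshold this probability (for example, at the commonly used 75% JND point) to decide whether two encoding recipes are perceptually distinguishable, without running a full JND subjective test per codec.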
