Paper Title

Universally Quantized Neural Compression

Paper Authors

Eirikur Agustsson, Lucas Theis

Abstract

A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization.
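
As a concrete illustration of the dithered quantization identity the abstract refers to, here is a minimal sketch (assuming NumPy; the function name and example values are hypothetical, not from the paper): with a dither u shared between encoder and decoder, round(y - u) + u has the same distribution as y + Uniform(-1/2, 1/2), so the uniform noise channel used during training can be realized exactly at test time.

```python
import numpy as np

def uniform_noise_channel(y, rng):
    """Realize a uniform noise channel via universal quantization (Ziv, 1985).

    With a dither u shared between encoder and decoder (e.g. drawn from a
    synchronized pseudo-random generator), round(y - u) + u follows the same
    distribution as y + Uniform(-1/2, 1/2).
    """
    u = rng.uniform(-0.5, 0.5, size=np.shape(y))  # shared dither
    k = np.round(y - u)  # integers; in a real codec these are entropy-coded
    return k + u         # decoder adds the same dither back

rng = np.random.default_rng(0)   # stands in for the shared randomness
y = np.array([0.3, 1.7, -2.2])   # hypothetical encoder outputs
print(uniform_noise_channel(y, rng))
```

Note that only the integers k need to be transmitted; because the receiver can regenerate the same dither u, the reconstruction k + u is exact on both sides.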
