Paper Title
Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser
Paper Authors
Abstract
Prior probability models are a fundamental component of many image processing problems, but density estimation is notoriously difficult for high-dimensional signals such as photographic images. Deep neural networks have provided state-of-the-art solutions for problems such as denoising, which implicitly rely on a prior probability model of natural images. Here, we develop a robust and general methodology for making use of this implicit prior. We rely on a statistical result due to Miyasawa (1961), who showed that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this fact to develop a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind (i.e., with unknown noise level) least-squares denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem, with no additional training. We demonstrate this general form of transfer learning in multiple applications, using the same algorithm to produce state-of-the-art levels of unsupervised performance for deblurring, super-resolution, inpainting, and compressive sensing.
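The abstract's key ingredient is Miyasawa's (1961) identity: for additive Gaussian noise of variance σ², the least-squares denoiser satisfies x̂(y) = y + σ² ∇_y log p(y), so the denoiser residual x̂(y) − y is a scaled gradient of the log of the noisy signal density. The following is a minimal, self-contained sketch of the resulting stochastic coarse-to-fine gradient ascent, substituting a closed-form toy denoiser for a standard-normal prior in place of the paper's trained CNN; the function names, the σ-from-residual estimate, and the step/noise schedule are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def blind_denoise(y):
    # Toy stand-in for a trained blind denoiser. For a standard-normal
    # prior x ~ N(0, I) corrupted by noise of variance s2, the
    # least-squares denoiser is x_hat(y) = y / (1 + s2). A blind
    # denoiser must infer s2 from y itself; here we use the excess of
    # the sample second moment over the prior variance (an assumption
    # of this sketch, not the paper's method).
    s2 = max(np.mean(y**2) - 1.0, 1e-6)
    return y / (1.0 + s2)

def sample_from_prior(n=1000, steps=200, h=0.2, beta=0.5):
    # Stochastic gradient ascent on log p(y): by Miyasawa's identity the
    # residual f(y) = x_hat(y) - y is proportional to grad_y log p(y),
    # so stepping along f climbs the (noise-smoothed) log density.
    y = 10.0 * rng.standard_normal(n)        # start far from the prior mode
    for _ in range(steps):
        f = blind_denoise(y) - y             # scaled gradient of log density
        sigma = np.sqrt(np.mean(f**2))       # effective remaining noise level
        gamma = beta * h * sigma             # injected noise shrinks with sigma
        y = y + h * f + gamma * rng.standard_normal(n)
    return y

sample = sample_from_prior()
print(np.mean(sample**2))  # should land near the prior's unit variance
```

Because the injected noise is tied to the estimated remaining noise level, the procedure is coarse-to-fine: early steps explore broadly, later steps refine. Replacing `blind_denoise` with a CNN trained for blind least-squares denoising turns the same loop into a sampler for the image prior implicit in that network, and constraining each step to the measurement subspace extends it to linear inverse problems.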