Paper Title


PnP-ReG: Learned Regularizing Gradient for Plug-and-Play Gradient Descent

Authors

Rita Fermanian, Mikael Le Pendu, Christine Guillemot

Abstract


The Plug-and-Play (PnP) framework makes it possible to integrate advanced image denoising priors into optimization algorithms, to efficiently solve a variety of image restoration tasks generally formulated as Maximum A Posteriori (MAP) estimation problems. The Plug-and-Play alternating direction method of multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two examples of such methods that made a breakthrough in image restoration. However, while the former method only applies to proximal algorithms, it has recently been shown that there exists no regularization that explains the RED algorithm when the denoisers lack Jacobian symmetry, which happens to be the case for most practical denoisers. To the best of our knowledge, there exists no method for training a network that directly represents the gradient of a regularizer, which can be directly used in Plug-and-Play gradient-based algorithms. We show that it is possible to train a network directly modeling the gradient of a MAP regularizer while jointly training the corresponding MAP denoiser. We use this network in gradient-based optimization methods and obtain better results compared to other generic Plug-and-Play approaches. We also show that the regularizer can be used as a pre-trained network for unrolled gradient descent. Lastly, we show that the resulting denoiser allows for a better convergence of the Plug-and-Play ADMM.
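The idea of plugging a learned regularizer gradient into a gradient-based solver can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm or training procedure: the function names (`pnp_gradient_descent`, `toy_reg_grad`) are assumptions, the degradation operator is taken as a plain matrix, and the learned network is replaced by the analytic gradient of a quadratic (Tikhonov) prior so the example is self-contained.

```python
import numpy as np

def pnp_gradient_descent(y, A, reg_grad, sigma2=1.0, step=0.1, n_iters=100):
    """Plug-and-Play gradient descent sketch (illustrative, not the paper's code).

    Minimizes (1/2)||A x - y||^2 / sigma2 + R(x), where the gradient of the
    regularizer R is supplied by `reg_grad` -- in the paper this would be the
    learned network modeling the MAP regularizer gradient.
    """
    x = A.T @ y  # simple initialization from the observation
    for _ in range(n_iters):
        data_grad = A.T @ (A @ x - y) / sigma2  # gradient of the data-fidelity term
        x = x - step * (data_grad + reg_grad(x))  # plug in the regularizer gradient
    return x

# Stand-in for the learned regularizer-gradient network: gradient of a
# quadratic prior R(x) = (lam/2)||x||^2, used here for illustration only.
lam = 0.05
toy_reg_grad = lambda x: lam * x
```

With a well-behaved `reg_grad` (e.g. the gradient of a convex prior, as in this toy stand-in), each step decreases the MAP objective for a small enough step size; the appeal of the approach described in the abstract is that the same update applies unchanged when `reg_grad` is a trained network.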
