Title

One Size Fits All: Can We Train One Denoiser for All Noise Levels?

Authors

Abhiram Gnanasambandam and Stanley H. Chan

Abstract

When training an estimator such as a neural network for tasks like image denoising, it is often preferred to train one estimator and apply it to all noise levels. The de facto training protocol to achieve this goal is to train the estimator with noisy samples whose noise levels are uniformly distributed across the range of interest. However, why should we allocate the samples uniformly? Can we have more training samples that are less noisy, and fewer samples that are more noisy? What is the optimal distribution? How do we obtain such a distribution? The goal of this paper is to address this training sample distribution problem from a minimax risk optimization perspective. We derive a dual ascent algorithm to determine the optimal sampling distribution of which the convergence is guaranteed as long as the set of admissible estimators is closed and convex. For estimators with non-convex admissible sets such as deep neural networks, our dual formulation converges to a solution of the convex relaxation. We discuss how the algorithm can be implemented in practice. We evaluate the algorithm on linear estimators and deep networks.
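The dual ascent idea described in the abstract can be sketched roughly as follows. The snippet below is a minimal illustration, not the authors' implementation: it assumes a discrete grid of noise levels and two hypothetical user-supplied helpers, train_estimator (fits an estimator with noise levels drawn from the current distribution) and risk (evaluates the estimator at one noise level). Each iteration performs the inner minimization by training on the current distribution, then takes a projected gradient-ascent step on the distribution using the per-noise-level risks, which is the outer maximization of the worst-case risk.

import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    tau = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + tau, 0.0)

def dual_ascent_sampling(noise_levels, train_estimator, risk, num_iters=50, step=0.1):
    # noise_levels: 1-D array of noise levels (a discrete grid) -- an assumption of this sketch.
    # train_estimator(noise_levels, lam): hypothetical helper that returns an estimator
    #   trained on noisy samples whose noise levels are drawn according to lam.
    # risk(theta, sigma): hypothetical helper that estimates the risk of estimator theta
    #   at noise level sigma, e.g. MSE on a validation set.
    k = len(noise_levels)
    lam = np.full(k, 1.0 / k)  # start from the uniform distribution
    theta = None
    for _ in range(num_iters):
        theta = train_estimator(noise_levels, lam)                 # inner minimization
        risks = np.array([risk(theta, s) for s in noise_levels])   # per-noise-level risks
        lam = project_simplex(lam + step * risks)                  # dual (outer) ascent step
    return lam, theta

Per the abstract, convergence of this kind of alternation is guaranteed when the set of admissible estimators is closed and convex; for deep networks the inner training step is only approximate, and the dual formulation converges to a solution of the convex relaxation.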
