Paper Title


Trained Model in Supervised Deep Learning is a Conditional Risk Minimizer

Paper Authors

Yutong Xie, Dufan Wu, Bin Dong, Quanzheng Li

Paper Abstract


We proved that a trained model in supervised deep learning minimizes the conditional risk for each input (Theorem 2.1). This property provided insights into the behavior of trained models and established a connection between supervised and unsupervised learning in some cases. In addition, when the labels are intractable but can be written as a conditional risk minimizer, we proved an equivalent form of the original supervised learning problem with accessible labels (Theorem 2.2). We demonstrated that many existing works, such as Noise2Score, Noise2Noise, and score function estimation, can be explained by our theorems. Moreover, we derived a property of the classification problem with noisy labels using Theorem 2.1 and validated it on the MNIST dataset. Furthermore, we proposed a method to estimate uncertainty in image super-resolution based on Theorem 2.2 and validated it on the ImageNet dataset. Our code is available on GitHub.
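
For intuition, here is a minimal sketch of what Theorem 2.1 asserts, written in generic notation that is not necessarily the paper's: if a model $f$ is trained to minimize the population risk $\mathbb{E}_{x,y}\big[\ell(f(x), y)\big]$, then the risk decomposes as $\mathbb{E}_{x}\big[\mathbb{E}_{y \mid x}[\ell(f(x), y)]\big]$ and can be minimized pointwise in $x$, so the optimal model is the conditional risk minimizer

$$
f^{*}(x) \;=\; \operatorname*{arg\,min}_{c}\; \mathbb{E}_{y \mid x}\big[\ell(c, y)\big].
$$

For example, with the squared loss $\ell(c, y) = \lVert c - y \rVert^{2}$ this gives $f^{*}(x) = \mathbb{E}[y \mid x]$, the posterior mean; this is the standard reading behind results such as Noise2Noise, where training against zero-mean noisy targets still recovers the clean posterior mean.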
