Paper Title
Cross-Camera Convolutional Color Constancy
Paper Authors
Paper Abstract
We present "Cross-Camera Convolutional Color Constancy" (C5), a learning-based method, trained on images from multiple cameras, that accurately estimates a scene's illuminant color from raw images captured by a new camera previously unseen during training. C5 is a hypernetwork-like extension of the convolutional color constancy (CCC) approach: C5 learns to generate the weights of a CCC model that is then evaluated on the input image, with the CCC weights dynamically adapted to different input content. Unlike prior cross-camera color constancy models, which are usually designed to be agnostic to the spectral properties of test-set images from unobserved cameras, C5 approaches this problem through the lens of transductive inference: additional unlabeled images are provided as input to the model at test time, which allows the model to calibrate itself to the spectral properties of the test-set camera during inference. C5 achieves state-of-the-art accuracy for cross-camera color constancy on several datasets, is fast to evaluate (~7 and ~90 ms per image on a GPU or CPU, respectively), and requires little memory (~2 MB), and thus is a practical solution to the problem of calibration-free automatic white balance for mobile photography.
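To make the hypernetwork idea in the abstract concrete, below is a minimal, illustrative sketch of the C5 mechanism: a small network ingests the query image's log-chroma histogram together with histograms of additional unlabeled images from the same (unseen) camera, emits the weights of a per-image CCC model (a 2D filter and a bias map), and that generated CCC model is then evaluated on the query histogram to produce an illuminant estimate. This is not the authors' released implementation; the class and variable names (`C5Hypernet`, `NUM_BINS`, `num_extra`, the histogram range) and the simple encoder are assumptions for illustration, and the real C5 uses a richer, FFCC-style architecture.

```python
# Illustrative sketch only (assumed names and architecture), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BINS = 64  # size of the log-chroma (u, v) histogram (assumed)

class C5Hypernet(nn.Module):
    """Hypernetwork: maps the query histogram plus extra unlabeled histograms
    from the same test camera to the weights of a per-image CCC model."""
    def __init__(self, num_extra=7, hidden=64):
        super().__init__()
        in_ch = 1 + num_extra  # query histogram + additional unlabeled histograms
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.to_filter = nn.Conv2d(hidden, 1, 3, padding=1)  # generated CCC filter
        self.to_bias = nn.Conv2d(hidden, 1, 3, padding=1)    # generated CCC bias map

    def forward(self, query_hist, extra_hists):
        # query_hist:  (B, 1, NUM_BINS, NUM_BINS) log-chroma histogram of the image
        # extra_hists: (B, num_extra, NUM_BINS, NUM_BINS) from the same camera
        feats = self.encoder(torch.cat([query_hist, extra_hists], dim=1))
        ccc_filter = self.to_filter(feats)
        ccc_bias = self.to_bias(feats)

        # Evaluate the generated CCC model on the query histogram:
        # circular convolution (done here in the Fourier domain, as in CCC/FFCC),
        # plus the bias, then a softmax over bins gives a probability map over
        # candidate illuminant chromaticities.
        heat = torch.fft.irfft2(
            torch.fft.rfft2(query_hist) * torch.fft.rfft2(ccc_filter),
            s=(NUM_BINS, NUM_BINS),
        ) + ccc_bias
        prob = F.softmax(heat.flatten(1), dim=-1).view_as(heat)

        # Expected illuminant chroma over the bin centers (range is an assumption).
        centers = torch.linspace(-2.0, 2.0, NUM_BINS)
        uu = centers.view(-1, 1).expand(NUM_BINS, NUM_BINS)
        vv = centers.view(1, -1).expand(NUM_BINS, NUM_BINS)
        est_u = (prob.squeeze(1) * uu).sum(dim=(-2, -1))
        est_v = (prob.squeeze(1) * vv).sum(dim=(-2, -1))
        return est_u, est_v

# Example usage with random stand-in histograms:
net = C5Hypernet(num_extra=7)
query = torch.rand(1, 1, NUM_BINS, NUM_BINS)
extras = torch.rand(1, 7, NUM_BINS, NUM_BINS)
u_est, v_est = net(query, extras)
```

The key point the sketch tries to convey is the transductive step: because `extra_hists` come from the test camera itself, the generated CCC filter and bias are implicitly calibrated to that camera's spectral properties at inference time, without any labeled data from it.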