Paper Title
Consistent Video Depth Estimation
Paper Authors
Paper Abstract
We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. We leverage a conventional structure-from-motion reconstruction to establish geometric constraints on pixels in the video. Unlike the ad-hoc priors in classical reconstruction, we use a learning-based prior, i.e., a convolutional neural network trained for single-image depth estimation. At test time, we fine-tune this network to satisfy the geometric constraints of a particular input video, while retaining its ability to synthesize plausible depth details in parts of the video that are less constrained. We show through quantitative validation that our method achieves higher accuracy and a higher degree of geometric consistency than previous monocular reconstruction methods. Visually, our results appear more stable. Our algorithm is able to handle challenging hand-held captured input videos with a moderate degree of dynamic motion. The improved quality of the reconstruction enables several applications, such as scene reconstruction and advanced video-based visual effects.
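The geometric constraint the abstract refers to can be illustrated with a small sketch: depth predicted for one frame is unprojected to 3D, transformed into a second frame using the camera pose recovered by structure-from-motion, and compared against the depth predicted for that second frame. This is not the paper's exact loss (the method combines a spatial reprojection term with a disparity term and backpropagates it to fine-tune the network); the function name and nearest-neighbour sampling below are simplifying assumptions for illustration.

```python
import numpy as np

def reprojection_consistency_loss(depth_i, depth_j, K, R, t):
    """Measure geometric consistency between two frames' depth maps.

    Unprojects every pixel of frame i to 3D using depth_i, transforms the
    points into frame j's coordinates with the relative pose (R, t),
    projects them with intrinsics K, and compares the transformed inverse
    depth against depth_j sampled at the projected pixel (nearest
    neighbour for simplicity).
    """
    h, w = depth_i.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape 3 x N.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                 # back-projected viewing rays
    pts_i = rays * depth_i.reshape(1, -1)         # 3D points in frame i
    pts_j = R @ pts_i + t[:, None]                # same points in frame j
    proj = K @ pts_j
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not valid.any():
        return 0.0
    # Inverse-depth (disparity) difference, averaged over valid pixels.
    err = np.abs(1.0 / pts_j[2, valid] - 1.0 / depth_j[v[valid], u[valid]])
    return float(err.mean())
```

At test time, a loss of this form is minimized over the network weights for the specific input video, which is what pulls the per-frame depth predictions into mutual geometric agreement.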