Paper Title

Towards Domain-agnostic Depth Completion

Authors

Guangkai Xu, Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Simon Chen, Jia-Wang Bian

Abstract

Existing depth completion methods are often targeted at a specific sparse depth type and generalize poorly across task domains. We present a method to complete sparse/semi-dense, noisy, and potentially low-resolution depth maps obtained by various range sensors, including those in modern mobile phones, or by multi-view reconstruction algorithms. Our method leverages a data-driven prior in the form of a single image depth prediction network trained on large-scale datasets, the output of which is used as an input to our model. We propose an effective training scheme where we simulate various sparsity patterns in typical task domains. In addition, we design two new benchmarks to evaluate the generalizability and the robustness of depth completion methods. Our simple method shows superior cross-domain generalization ability against state-of-the-art depth completion methods, introducing a practical solution to high-quality depth capture on a mobile device. The code is available at: https://github.com/YvanYin/FillDepth.
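The abstract mentions a training scheme that simulates various sparsity patterns on dense depth maps. The paper does not give implementation details here, but the idea can be sketched as randomly subsampling and perturbing a dense ground-truth depth map; the function name, ratio, and noise model below are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def simulate_sparse_depth(dense_depth, keep_ratio=0.05, noise_std=0.01, seed=None):
    """Simulate a sparse, noisy depth map from a dense ground-truth map.

    Randomly keeps a fraction of pixels (mimicking, e.g., a sparse range
    sensor) and applies multiplicative Gaussian noise to the kept values.
    Returns the sparse depth map and the validity mask.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(dense_depth.shape) < keep_ratio
    sparse = np.zeros_like(dense_depth)
    noise = rng.normal(0.0, noise_std, size=dense_depth.shape)
    sparse[mask] = dense_depth[mask] * (1.0 + noise[mask])
    return sparse, mask

# Example: subsample a synthetic dense depth map at 50% density, no noise
dense = np.full((4, 4), 2.0, dtype=np.float32)
sparse, mask = simulate_sparse_depth(dense, keep_ratio=0.5, noise_std=0.0, seed=0)
print(sparse.shape, int(mask.sum()))
```

Varying `keep_ratio` and the noise model across training batches would expose the network to the different sparsity patterns (LiDAR-like dots, semi-dense multi-view points, low-resolution sensor grids) the abstract refers to.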
