Paper Title
Rethinking Disparity: A Depth Range Free Multi-View Stereo Based on Disparity
Paper Authors
Paper Abstract
Existing learning-based multi-view stereo (MVS) methods rely on the depth range to build the 3D cost volume and may fail when the range is too large or unreliable. To address this problem, we propose a disparity-based MVS method based on the epipolar disparity flow (E-flow), called DispMVS, which infers the depth information from the pixel movement between two views. The core of DispMVS is to construct a 2D cost volume on the image plane along the epipolar line between each pair (between the reference image and several source images) for pixel matching and fuse uncountable depths triangulated from each pair by multi-view geometry to ensure multi-view consistency. To be robust, DispMVS starts from a randomly initialized depth map and iteratively refines the depth map with the help of the coarse-to-fine strategy. Experiments on DTUMVS and Tanks&Temple datasets show that DispMVS is not sensitive to the depth range and achieves state-of-the-art results with lower GPU memory.
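The abstract's key idea is that depth can be recovered from pixel disparity along an epipolar line rather than from a predefined depth range. As background, the following minimal sketch shows the standard two-view linear (DLT) triangulation that underlies any such depth-from-disparity step; it is not DispMVS's implementation, and all names (`triangulate_depth`, the toy cameras, shared intrinsics `K`) are illustrative assumptions.

```python
import numpy as np

def triangulate_depth(K, R, t, p_ref, p_src):
    """Triangulate the depth of a reference pixel from its match on the
    source image's epipolar line, via linear (DLT) triangulation.

    K      : 3x3 intrinsics (assumed shared by both views for brevity)
    R, t   : pose mapping reference-camera coordinates to the source camera
    p_ref  : matched pixel (x, y) in the reference image
    p_src  : matched pixel (x, y) in the source image
    Returns the depth along the reference camera's z-axis.
    """
    # Projection matrices, with the reference camera at the origin.
    P_ref = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_src = K @ np.hstack([R, t.reshape(3, 1)])

    # DLT: each view contributes two rows of A, with A @ X = 0.
    A = np.stack([
        p_ref[0] * P_ref[2] - P_ref[0],
        p_ref[1] * P_ref[2] - P_ref[1],
        p_src[0] * P_src[2] - P_src[0],
        p_src[1] * P_src[2] - P_src[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    X = X[:3] / X[3]          # homogeneous -> Euclidean
    return X[2]               # depth = z in the reference frame

# Toy check: a point at depth 4, seen by two cameras with a 1-unit baseline.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([-1.0, 0.0, 0.0])      # source camera shifted along +x
X_world = np.array([0.2, -0.1, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P_ref = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_src = K @ np.hstack([R, t.reshape(3, 1)])
depth = triangulate_depth(K, R, t,
                          project(P_ref, X_world),
                          project(P_src, X_world))
# With noise-free matches, the recovered depth equals X_world[2].
```

DispMVS additionally fuses such per-pair triangulated depths across several source views to enforce multi-view consistency; the sketch above covers only a single reference-source pair.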