Paper Title

Learning Pseudo Front Depth for 2D Forward-Looking Sonar-based Multi-view Stereo

Paper Authors

Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Hajime Asama, Atsushi Yamashita

Paper Abstract

Retrieving the missing dimension information in acoustic images from 2D forward-looking sonar is a well-known problem in the field of underwater robotics. There are works that attempt to retrieve 3D information from a single image, which allows the robot to generate 3D maps with fly-through motion. However, owing to the unique image formation principle, estimating 3D information from a single image suffers from severe ambiguity. Classical multi-view stereo methods can avoid this ambiguity, but may require a large number of viewpoints to generate an accurate model. In this work, we propose a novel learning-based multi-view stereo method to estimate 3D information. To better utilize the information from multiple frames, an elevation plane sweeping method is proposed to generate a depth-azimuth-elevation cost volume. After regularization, the volume can be considered a probabilistic volumetric representation of the target. Instead of performing regression on the elevation angles, we use a pseudo front depth derived from the cost volume to represent the 3D information, which avoids the 2D-3D problem in acoustic imaging. High-accuracy results can be generated with only two or three images. Synthetic datasets were generated to simulate various underwater targets. We also built the first real dataset with accurate ground truth in a large-scale water tank. Experimental results demonstrate the superiority of our method compared with other state-of-the-art methods.
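
To make the elevation plane-sweeping idea from the abstract concrete, the sketch below shows, in plain NumPy, how a cost volume indexed by (range, azimuth, elevation) could be assembled from two registered 2D forward-looking sonar views and converted into a per-pixel probability over elevation hypotheses. This is an illustrative sketch only, not the authors' implementation: the function names, coordinate conventions, nearest-bin projection, and absolute-difference photometric cost are all simplifying assumptions, whereas the paper's learning-based method regularizes the cost volume with a network before extracting a pseudo front depth.

```python
# Illustrative sketch (not the authors' code): building an elevation-hypothesis
# cost volume of shape (range, azimuth, elevation) from two registered
# forward-looking sonar images, then a softmax over elevation.
import numpy as np

def polar_to_xyz(r, theta, phi):
    """Sonar spherical coordinates (range, azimuth, elevation) -> Cartesian."""
    x = r * np.cos(phi) * np.cos(theta)
    y = r * np.cos(phi) * np.sin(theta)
    z = r * np.sin(phi)
    return np.stack([x, y, z], axis=-1)

def xyz_to_bins(pts, ranges, azimuths):
    """Project 3D points back to the nearest (range-bin, azimuth-bin) indices."""
    r = np.linalg.norm(pts, axis=-1)
    theta = np.arctan2(pts[..., 1], pts[..., 0])
    ri = np.clip(np.searchsorted(ranges, r), 0, len(ranges) - 1)
    ti = np.clip(np.searchsorted(azimuths, theta), 0, len(azimuths) - 1)
    return ri, ti

def elevation_cost_volume(ref_img, src_img, R, t, ranges, azimuths, elevations):
    """For each reference pixel and each hypothesized elevation angle,
    back-project to 3D, transform into the source sonar frame with (R, t),
    sample the source image, and store the photometric difference."""
    rg, tg = np.meshgrid(ranges, azimuths, indexing="ij")   # (nr, na) grids
    cost = np.empty((len(ranges), len(azimuths), len(elevations)))
    for k, phi in enumerate(elevations):
        pts_ref = polar_to_xyz(rg, tg, phi)      # hypothesized 3D points
        pts_src = pts_ref @ R.T + t              # reference -> source frame
        ri, ti = xyz_to_bins(pts_src, ranges, azimuths)
        cost[..., k] = np.abs(ref_img - src_img[ri, ti])
    return cost

def elevation_probability(cost):
    """Softmax over the elevation axis: turns the (regularized) cost volume
    into a per-pixel probability over elevation hypotheses, i.e. a
    probabilistic volumetric representation of the target."""
    logits = -cost
    logits -= logits.max(axis=-1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)
```

In this toy version the cost is raw image differences; in the learned setting described by the abstract, the same sweep would be performed over feature maps and the resulting volume regularized before the pseudo front depth is read out.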
