Paper Title


Self-Supervised Ego-Motion Estimation Based on Multi-Layer Fusion of RGB and Inferred Depth

Paper Authors

Zijie Jiang, Hajime Taira, Naoyuki Miyashita, Masatoshi Okutomi

Paper Abstract


In existing self-supervised depth and ego-motion estimation methods, ego-motion estimation is usually limited to only leveraging RGB information. Recently, several methods have been proposed to further improve the accuracy of self-supervised ego-motion estimation by fusing information from other modalities, e.g., depth, acceleration, and angular velocity. However, they rarely focus on how different fusion strategies affect performance. In this paper, we investigate the effect of different fusion strategies for ego-motion estimation and propose a new framework for self-supervised learning of depth and ego-motion estimation, which performs ego-motion estimation by leveraging RGB and inferred depth information in a Multi-Layer Fusion manner. As a result, we have achieved state-of-the-art performance among learning-based methods on the KITTI odometry benchmark. Detailed studies on the design choices of leveraging inferred depth information and fusion strategies have also been carried out, which clearly demonstrate the advantages of our proposed framework.
