Paper Title
Masked GANs for Unsupervised Depth and Pose Prediction with Scale Consistency
Paper Authors
Paper Abstract
Previous work has shown that adversarial learning can be used for unsupervised monocular depth and visual odometry (VO) estimation, in which the adversarial loss and the geometric image reconstruction loss serve as the main supervisory signals for training the whole unsupervised framework. However, the performance of the adversarial framework and the image reconstruction is usually limited by occlusions and visual-field changes between frames. This paper proposes a masked generative adversarial network (GAN) for unsupervised monocular depth and ego-motion estimation. A MaskNet and a Boolean mask scheme are designed within this framework to eliminate the effects of occlusions and visual-field changes on the reconstruction loss and the adversarial loss, respectively. Furthermore, we enforce scale consistency in our pose network through a new scale-consistency loss, so that the pose network can provide the full camera trajectory over a long monocular sequence. Extensive experiments on the KITTI dataset show that each component proposed in this paper contributes to the performance, and both our depth and trajectory predictions achieve competitive results on the KITTI and Make3D datasets.
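The abstract does not give the loss formulas, so the following is a minimal sketch of the two supervisory terms it names: a masked photometric reconstruction loss and a scale-consistency loss. The function names, tensor shapes, and the normalization constant are illustrative assumptions, not the authors' implementation.

```python
import torch

def masked_reconstruction_loss(target, synthesized, mask):
    """Masked photometric loss (illustrative sketch).

    target, synthesized: [B, 3, H, W] image tensors, where `synthesized` is the
    target view re-rendered from an adjacent frame using predicted depth and pose.
    mask: [B, 1, H, W] weights that suppress occluded or out-of-view pixels so
    they do not contribute to the loss.
    """
    error = torch.abs(target - synthesized)              # per-pixel L1 error
    return (error * mask).sum() / mask.sum().clamp(min=1.0)

def scale_consistency_loss(depth_a, depth_b_aligned):
    """Scale-consistency term (illustrative sketch).

    depth_a: depth predicted for frame a; depth_b_aligned: depth predicted for
    frame b, warped/interpolated into frame a's view. Penalizing their
    normalized difference ties consecutive predictions to a single global scale,
    which is what lets the pose network accumulate a full trajectory over a
    long monocular sequence.
    """
    diff = torch.abs(depth_a - depth_b_aligned)
    return (diff / (depth_a + depth_b_aligned).clamp(min=1e-7)).mean()
```

In a framework of this kind, these terms would typically be summed with the adversarial loss (weighted by hyperparameters) to form the total training objective; the exact weighting is not specified in the abstract.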