Paper Title
Towards Model Generalization for Monocular 3D Object Detection
Paper Authors
Paper Abstract
Monocular 3D object detection (Mono3D) has achieved tremendous improvements with emerging large-scale autonomous driving datasets and the rapid development of deep learning techniques. However, Mono3D detectors generalize poorly because of severe domain gaps (e.g., differences in field of view (FOV), pixel size, and object size among datasets), leading to drastic performance degradation on unseen domains. To address these issues, we combine a position-invariant transform and multi-scale training with a pixel-size depth strategy to construct an effective unified camera-generalized paradigm (CGP), which fully accounts for the discrepancies in FOV and pixel size across images captured by different cameras. Moreover, through an exhaustive systematic study, we investigate the obstacles that degrade quantitative metrics in cross-dataset inference and find that the size bias of predictions is a dominant cause of failure. Hence, we propose the 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via instance-level augmentation. Our method, DGMono3D, achieves remarkable performance on all evaluated datasets and surpasses the SoTA unsupervised domain adaptation scheme even without using data from the target domain.
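Both the pixel-size depth strategy and GCOS follow from the pinhole projection relation h_px = f_px · H / Z (pixel extent h_px, focal length in pixels f_px, metric object size H, depth Z). Below is a minimal sketch of that geometry under stated assumptions; the function names and signatures are illustrative, not DGMono3D's actual implementation.

```python
import numpy as np

# Pinhole relation assumed throughout: h_px = f_px * H / Z, where f_px is
# the focal length expressed in pixels (focal length in mm divided by the
# physical pixel size, times any image-resize factor).
# NOTE: helper names and arguments are hypothetical, for illustration only.

def rescale_depth_for_target_camera(pred_depth, f_src_px, f_tgt_px):
    """Pixel-size depth correction (sketch).

    A detector trained on a source camera ties predicted depth to the
    pixel extent of objects. Since Z = f_px * H / h_px, the same pixel
    extent on a target camera with a different focal length in pixels
    implies a depth rescaled by f_tgt_px / f_src_px.
    """
    return pred_depth * f_tgt_px / f_src_px

def scale_instance_3d(dims, depth, scale):
    """2D-3D geometry-consistent object scaling (sketch).

    Scaling an object's metric dimensions and its depth by the same factor
    leaves the projected 2D box unchanged ((scale*H)/(scale*Z) = H/Z), so
    the 3D label can be shifted toward the target domain's object-size
    statistics without breaking 2D-3D consistency.
    """
    return np.asarray(dims) * scale, depth * scale

# Example: halving the focal length in pixels halves the depth implied by
# the same pixel extent; shrinking an instance by 10% shrinks dims and
# depth jointly.
z = rescale_depth_for_target_camera(pred_depth=20.0, f_src_px=1400.0, f_tgt_px=700.0)
dims, depth = scale_instance_3d([1.6, 1.8, 4.2], 20.0, 0.9)
```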