Paper Title
"The Pedestrian next to the Lamppost" Adaptive Object Graphs for Better Instantaneous Mapping
Paper Authors
Paper Abstract
Estimating a semantically segmented bird's-eye-view (BEV) map from a single image has become a popular technique for autonomous control and navigation. However, these methods show an increase in localization error with distance from the camera. While such an increase in error is entirely expected (localization is harder at a distance), much of the drop in performance can be attributed to the cues used by current texture-based models; in particular, they make heavy use of object-ground intersections (such as shadows), which become increasingly sparse and uncertain for distant objects. In this work, we address these shortcomings in BEV mapping by learning the spatial relationships between objects in a scene. We propose a graph neural network which predicts BEV objects from a monocular image by spatially reasoning about an object within the context of other objects. Our approach sets a new state-of-the-art in BEV estimation from monocular images across three large-scale datasets, including a 50% relative improvement for objects on nuScenes.
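To make the core idea concrete, below is a minimal sketch of one round of message passing over per-object node features with learned ("adaptive") edge weights, as the abstract describes. It assumes PyTorch; the class name ObjectGraphLayer, the feature dimension, and the edge/node MLP design are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ObjectGraphLayer(nn.Module):
    """Hypothetical sketch: one message-passing step over object nodes.

    Edge weights are predicted from pairwise node features (an adaptive
    adjacency), and each node aggregates context from the other objects
    before its BEV location would be decoded downstream.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (N, dim) image-derived features, one row per detected object
        n = nodes.size(0)
        src = nodes.unsqueeze(1).expand(n, n, -1)   # (N, N, dim) sender features
        dst = nodes.unsqueeze(0).expand(n, n, -1)   # (N, N, dim) receiver features
        pair = torch.cat([src, dst], dim=-1)        # (N, N, 2*dim) pairwise inputs
        logits = self.edge_mlp(pair).squeeze(-1)    # (N, N) adaptive edge scores
        weights = torch.softmax(logits, dim=-1)     # normalize over neighbours
        context = weights @ nodes                   # (N, dim) aggregated messages
        return self.node_mlp(torch.cat([nodes, context], dim=-1))

if __name__ == "__main__":
    layer = ObjectGraphLayer(dim=64)
    feats = torch.randn(5, 64)                      # e.g. 5 detected objects
    print(layer(feats).shape)                       # torch.Size([5, 64])
```

In this reading, distant objects with weak ground-contact cues can borrow spatial context from nearby, well-localized objects (the "pedestrian next to the lamppost" intuition), rather than relying solely on texture evidence at their own image location.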