Paper Title

StrObe: Streaming Object Detection from LiDAR Packets

Paper Authors

Davi Frossard, Simon Suo, Sergio Casas, James Tu, Rui Hu, Raquel Urtasun

Paper Abstract

Many modern robotics systems employ LiDAR as their main sensing modality due to its geometrical richness. Rolling shutter LiDARs are particularly common, in which an array of lasers scans the scene from a rotating base. Points are emitted as a stream of packets, each covering a sector of the 360° coverage. Modern perception algorithms wait for the full sweep to be built before processing the data, which introduces an additional latency. For typical 10Hz LiDARs this will be 100ms. As a consequence, by the time an output is produced, it no longer accurately reflects the state of the world. This poses a challenge, as robotics applications require minimal reaction times, such that maneuvers can be quickly planned in the event of a safety-critical situation. In this paper we propose StrObe, a novel approach that minimizes latency by ingesting LiDAR packets and emitting a stream of detections without waiting for the full sweep to be built. StrObe reuses computations from previous packets and iteratively updates a latent spatial representation of the scene, which acts as a memory, as new evidence comes in, resulting in accurate low-latency perception. We demonstrate the effectiveness of our approach on a large scale real-world dataset, showing that StrObe far outperforms the state-of-the-art when latency is taken into account, and matches the performance in the traditional setting.
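
The abstract describes a per-packet update loop: encode only the newest LiDAR packet, fuse it into a latent spatial memory of the scene, and emit detections immediately. Below is a minimal, hypothetical PyTorch sketch of that idea. The class name `StreamingDetector`, the bird's-eye-view (BEV) rasterization, the 100 m grid extent, and all layer shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of streaming detection from LiDAR packets.
# All names, shapes, and the packet format are assumptions for illustration.
import torch
import torch.nn as nn

class StreamingDetector(nn.Module):
    """Keeps a latent BEV memory of the scene and updates it incrementally
    as each LiDAR packet (one sector of the sweep) arrives."""

    def __init__(self, grid=128, feat=32, num_anchors=2):
        super().__init__()
        # Latent spatial memory over a BEV grid; acts as the scene state.
        self.register_buffer("memory", torch.zeros(1, feat, grid, grid))
        self.encoder = nn.Conv2d(1, feat, 3, padding=1)       # per-packet features
        self.fuse = nn.Conv2d(2 * feat, feat, 3, padding=1)   # memory update
        self.head = nn.Conv2d(feat, num_anchors * 7, 1)       # box parameters
        self.grid = grid

    def rasterize(self, points):
        """Scatter packet points (N, 3) into a 1-channel BEV occupancy map,
        assuming a [-50 m, 50 m] square region around the sensor."""
        bev = torch.zeros(1, 1, self.grid, self.grid)
        ij = ((points[:, :2] + 50.0) / 100.0 * self.grid).long()
        ij = ij.clamp(0, self.grid - 1)
        bev[0, 0, ij[:, 1], ij[:, 0]] = 1.0
        return bev

    def forward(self, packet_points):
        # Encode only the new packet; computation for earlier packets
        # is reused implicitly through the memory.
        feat = self.encoder(self.rasterize(packet_points))
        self.memory = self.fuse(torch.cat([self.memory, feat], dim=1))
        return self.head(self.memory)  # detections emitted per packet

# Usage: outputs are available after every packet, not after a full sweep.
det = StreamingDetector()
with torch.no_grad():
    for _ in range(10):  # e.g. 10 packets forming one 100 ms sweep at 10 Hz
        pts = torch.rand(1000, 3) * 100.0 - 50.0  # synthetic packet (x, y, z)
        boxes = det(pts)
```

The property this sketch tries to capture is the one the abstract claims: per-packet cost covers only the newly arrived sector, while the persistent memory carries forward evidence from earlier packets, so detections can be emitted at packet rate rather than sweep rate.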
