Paper Title

Context-Aware Streaming Perception in Dynamic Environments

Authors

Gur-Eyal Sela, Ionel Gog, Justin Wong, Kumar Krishna Agrawal, Xiangxi Mo, Sukrit Kalra, Peter Schafhalter, Eric Leong, Xin Wang, Bharathan Balaji, Joseph Gonzalez, Ion Stoica

Abstract

Efficient vision works maximize accuracy under a latency budget. These works evaluate accuracy offline, one image at a time. However, real-time vision applications like autonomous driving operate in streaming settings, where ground truth changes between inference start and finish. This results in a significant accuracy drop. Therefore, a recent work proposed to maximize accuracy in streaming settings on average. In this paper, we propose to maximize streaming accuracy for every environment context. We posit that scenario difficulty influences the initial (offline) accuracy difference, while obstacle displacement in the scene affects the subsequent accuracy degradation. Our method, Octopus, uses these scenario properties to select configurations that maximize streaming accuracy at test time. Our method improves tracking performance (S-MOTA) by 7.4% over the conventional static approach. Further, performance improvement using our method comes in addition to, and not instead of, advances in offline accuracy.
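The abstract describes selecting, at test time, the inference configuration that maximizes *streaming* accuracy: offline accuracy (discounted by scene difficulty) minus the degradation caused by obstacles moving during inference latency. The sketch below illustrates this trade-off; the config names, feature scales, and the linear accuracy model are illustrative assumptions, not Octopus's actual implementation.

```python
# Hedged sketch of context-aware configuration selection. All names,
# features, and the accuracy model are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Config:
    name: str
    offline_accuracy: float  # accuracy ignoring latency (higher for large models)
    latency_s: float         # inference latency (also higher for large models)


def streaming_accuracy(cfg, scene_difficulty, obstacle_displacement):
    """Estimated streaming accuracy = discounted offline accuracy minus degradation.

    Scene difficulty scales down offline accuracy; displacement x latency
    approximates how stale the prediction is by the time inference finishes.
    """
    offline = cfg.offline_accuracy * (1.0 - 0.5 * scene_difficulty)
    degradation = obstacle_displacement * cfg.latency_s
    return offline - degradation


def select_config(configs, scene_difficulty, obstacle_displacement):
    # Pick the configuration maximizing estimated streaming accuracy.
    return max(
        configs,
        key=lambda c: streaming_accuracy(c, scene_difficulty, obstacle_displacement),
    )


configs = [
    Config("small-fast", offline_accuracy=0.70, latency_s=0.05),
    Config("large-slow", offline_accuracy=0.90, latency_s=0.30),
]

# Slow-moving scene: the large model's offline edge dominates.
print(select_config(configs, scene_difficulty=0.2, obstacle_displacement=0.1).name)
# Fast-moving obstacles: degradation punishes latency, so the fast model wins.
print(select_config(configs, scene_difficulty=0.2, obstacle_displacement=1.0).name)
```

Note how neither configuration dominates: the selector switches between them as the environment context changes, which is the core idea behind maximizing streaming accuracy per context rather than on average.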
