Paper Title


V2XP-ASG: Generating Adversarial Scenes for Vehicle-to-Everything Perception

Paper Authors

Hao Xiang, Runsheng Xu, Xin Xia, Zhaoliang Zheng, Bolei Zhou, Jiaqi Ma

Paper Abstract


Recent advancements in Vehicle-to-Everything (V2X) communication technology have enabled autonomous vehicles to share sensory information to obtain better perception performance. With the rapid growth of autonomous vehicles and intelligent infrastructure, V2X perception systems will soon be deployed at scale, which raises a safety-critical question: how can we evaluate and improve their performance under challenging traffic scenarios before real-world deployment? Collecting diverse, large-scale real-world test scenes seems the most straightforward solution, but it is expensive and time-consuming, and such collections can cover only a limited range of scenarios. To this end, we propose V2XP-ASG, the first open adversarial scene generator that can produce realistic, challenging scenes for modern LiDAR-based multi-agent perception systems. V2XP-ASG learns to construct an adversarial collaboration graph and to perturb multiple agents' poses simultaneously in an adversarial yet plausible manner. Experiments demonstrate that V2XP-ASG can effectively identify challenging scenes for a wide range of V2X perception systems. Moreover, by training on a limited number of the generated challenging scenes, the accuracy of V2X perception systems can be further improved by 12.3% on challenging scenes and 4% on normal scenes. Our code will be released at https://github.com/XHwind/V2XP-ASG.
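The core idea of adversarial pose perturbation can be illustrated with a minimal sketch. This is not the paper's actual method: `perception_loss` here is a hypothetical toy surrogate standing in for running the LiDAR-based multi-agent detector on a scene, and the greedy black-box search, step size, and plausibility bounds are all illustrative assumptions. The sketch only shows the general pattern of perturbing agents' poses to maximize perception loss while keeping the scene within plausible limits.

```python
import random

def perception_loss(poses):
    # Hypothetical surrogate loss; in the real system this would run the
    # V2X detector on the rendered scene and measure its detection error.
    return sum((x - 3.0) ** 2 + (y + 1.0) ** 2 for x, y in poses)

def adversarial_pose_search(poses, steps=200, step_size=0.5, seed=0):
    """Greedy black-box search: perturb one agent's pose at a time and keep
    the change only if it increases the perception loss (a harder scene),
    clamping each coordinate so the perturbed scene stays plausible."""
    rng = random.Random(seed)
    poses = [list(p) for p in poses]
    best = perception_loss(poses)
    for _ in range(steps):
        i = rng.randrange(len(poses))              # pick an agent to perturb
        dx = rng.uniform(-step_size, step_size)
        dy = rng.uniform(-step_size, step_size)
        candidate = [list(p) for p in poses]
        candidate[i][0] = max(-10.0, min(10.0, candidate[i][0] + dx))
        candidate[i][1] = max(-10.0, min(10.0, candidate[i][1] + dy))
        score = perception_loss(candidate)
        if score > best:                           # adversarial: maximize loss
            poses, best = candidate, score
    return poses, best

initial = [(0.0, 0.0), (2.0, 2.0)]
adv_poses, adv_loss = adversarial_pose_search(initial)
```

After the search, `adv_loss` exceeds the loss of the initial scene, i.e. the perturbed poses form a more challenging configuration for the (toy) perception model. The real V2XP-ASG additionally learns which agents should collaborate (the adversarial collaboration graph), which this sketch omits.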
