Paper Title
Detecting Line Segments in Motion-blurred Images with Events
Paper Authors
Paper Abstract
Making line segment detectors more reliable under motion blur is one of the most important challenges for practical applications such as visual SLAM and 3D reconstruction. Existing line segment detection methods suffer severe performance degradation in accurately detecting and localizing line segments when motion blur occurs. Event data, by contrast, exhibits minimal blur and strong edge awareness at high temporal resolution, complementary characteristics to images that are potentially beneficial for reliable line segment recognition. To robustly detect line segments under motion blur, we propose to leverage the complementary information of images and events. To achieve this, we first design a general frame-event feature fusion network to extract and fuse detailed image textures and low-latency event edges, which consists of a channel-attention-based shallow fusion module and a self-attention-based dual hourglass module. We then utilize two state-of-the-art wireframe parsing networks to detect line segments on the fused feature map. In addition, we contribute a synthetic and a realistic dataset for line segment detection, i.e., FE-Wireframe and FE-Blurframe, with pairwise motion-blurred images and events. Extensive experiments on both datasets demonstrate the effectiveness of the proposed method. When tested on the real dataset, our method achieves 63.3% mean structural average precision (msAP) with the model pre-trained on FE-Wireframe and fine-tuned on FE-Blurframe, an improvement of 32.6 and 11.3 points over models trained on synthetic data only and real data only, respectively. The code, datasets, and trained models are released at: https://levenberg.github.io/FE-LSD
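To illustrate the channel-attention-based shallow fusion the abstract describes, here is a minimal NumPy sketch of squeeze-and-excitation-style gating over concatenated image and event features. This is an assumption-laden toy, not the authors' implementation: the real module uses learned excitation weights and operates inside a trained network, whereas this sketch gates channels directly by their pooled activations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_fusion(img_feat, evt_feat):
    """Fuse image and event feature maps of shape (C, H, W).

    Hypothetical squeeze-and-excitation style gating: global average
    pooling gives a per-channel descriptor of the concatenated features,
    a sigmoid gate reweights each channel, and the two modalities are
    fused by summation. (The paper's module learns this gating.)
    """
    # concatenate along the channel axis -> (2C, H, W)
    x = np.concatenate([img_feat, evt_feat], axis=0)
    # squeeze: global average pool over spatial dims -> (2C,)
    desc = x.mean(axis=(1, 2))
    # excitation: sigmoid gate (no learned weights in this sketch)
    gate = sigmoid(desc)
    # reweight channels, then split back and fuse by summation
    x = x * gate[:, None, None]
    c = img_feat.shape[0]
    return x[:c] + x[c:]

fused = channel_attention_fusion(np.ones((8, 4, 4)), np.zeros((8, 4, 4)))
print(fused.shape)  # (8, 4, 4)
```

The fused map keeps the image-branch channel count, so a downstream wireframe parser can consume it in place of a single-modality feature map.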