Paper Title

Improved Handling of Motion Blur in Online Object Detection

Paper Authors

Mohamed Sayed, Gabriel Brostow

Paper Abstract

We wish to detect specific categories of objects, for online vision systems that will run in the real world. Object detection is already very challenging. It is even harder when the images are blurred, from the camera being in a car or a hand-held phone. Most existing efforts either focused on sharp images, with easy-to-label ground truth, or they have treated motion blur as one of many generic corruptions. Instead, we focus especially on the details of egomotion-induced blur. We explore five classes of remedies, where each targets different potential causes for the performance gap between sharp and blurred images. For example, first deblurring an image changes its human interpretability, but at present, only partly improves object detection. The other four classes of remedies address multi-scale texture, out-of-distribution testing, label generation, and conditioning by blur-type. Surprisingly, we discover that custom label generation aimed at resolving spatial ambiguity, ahead of all others, markedly improves object detection. Also, in contrast to findings from classification, we see a noteworthy boost by conditioning our model on bespoke categories of motion blur. We validate and cross-breed the different remedies experimentally on blurred COCO images and real-world blur datasets, producing an easy and practical favorite model with superior detection rates.
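
The abstract evaluates detectors on blurred COCO images. As a rough illustration of that setup only (not the authors' blur model), the sketch below synthesizes simple linear motion blur on an image with a rotated line kernel; the kernel length, angle, file names, and use of OpenCV are illustrative assumptions.

```python
# Hypothetical sketch: apply linear motion blur to an image, e.g. before
# running a detector on blurred COCO data. Not the paper's actual blur model.
import numpy as np
import cv2


def linear_motion_blur_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Build a normalized line kernel of a given length and orientation."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0  # horizontal line through the kernel center
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    return kernel / max(kernel.sum(), 1e-8)  # normalize to preserve brightness


def blur_image(image: np.ndarray, length: int = 15, angle_deg: float = 0.0) -> np.ndarray:
    """Convolve an HxWx3 image with a linear motion-blur kernel."""
    kernel = linear_motion_blur_kernel(length, angle_deg)
    return cv2.filter2D(image, -1, kernel)


if __name__ == "__main__":
    # "example_coco_image.jpg" is a placeholder file name for illustration.
    img = cv2.imread("example_coco_image.jpg")
    if img is not None:
        blurred = blur_image(img, length=21, angle_deg=30.0)
        cv2.imwrite("example_coco_image_blurred.jpg", blurred)
```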
