Title
Dynamic Refinement Network for Oriented and Densely Packed Object Detection
Authors
Abstract
Object detection has achieved remarkable progress in the past decade. However, the detection of oriented and densely packed objects remains challenging for the following inherent reasons: (1) the receptive fields of neurons are all axis-aligned and of the same shape, whereas objects usually have diverse shapes and are aligned along various directions; (2) detection models are typically trained with generic knowledge and may not generalize well to handle specific objects at test time; (3) limited datasets hinder progress on this task. To resolve the first two issues, we present a dynamic refinement network that consists of two novel components, i.e., a feature selection module (FSM) and a dynamic refinement head (DRH). Our FSM enables neurons to adjust their receptive fields in accordance with the shapes and orientations of target objects, whereas the DRH empowers our model to refine predictions dynamically in an object-aware manner. To address the limited availability of related benchmarks, we collect an extensive and fully annotated dataset, namely SKU110K-R, which is relabeled with oriented bounding boxes based on SKU110K. We perform quantitative evaluations on several publicly available benchmarks, including DOTA, HRSC2016, SKU110K, and our own SKU110K-R dataset. Experimental results show that our method achieves consistent and substantial gains over baseline approaches. The code and dataset are available at https://github.com/Anymake/DRN_CVPR2020.
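The core FSM idea stated above — letting each spatial position weight responses from receptive fields of different shapes — can be sketched in a few lines. The kernel shapes, the per-pixel softmax gate, and all function names below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import math

def conv_same(x, k):
    """'Same'-padded 2-D cross-correlation on nested lists (zero padding)."""
    H, W = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    ii, jj = i + di - ph, j + dj - pw
                    if 0 <= ii < H and 0 <= jj < W:
                        s += x[ii][jj] * k[di][dj]
            out[i][j] = s
    return out

def feature_selection(x, kernels, gate_w):
    """Hypothetical FSM sketch: run branches with differently shaped
    receptive fields, then blend them with a per-pixel softmax gate."""
    branches = [conv_same(x, k) for k in kernels]
    H, W = len(x), len(x[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            logits = [g * b[i][j] for g, b in zip(gate_w, branches)]
            m = max(logits)
            ws = [math.exp(v - m) for v in logits]
            z = sum(ws)
            out[i][j] = sum(w / z * b[i][j] for w, b in zip(ws, branches))
    return out
```

Here a square 3x3 branch, a horizontal 1x5 branch, and a vertical 5x1 branch stand in for differently oriented receptive fields; the gate weights `gate_w` play the role of learned selection parameters.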