Paper Title
A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery
Paper Authors
Paper Abstract
In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in a cornfield with different growth stages and in a citrus orchard. The two datasets characterize different plant density scenarios, locations, crop types, sensors, and dates. Our CNN method implements a two-branch architecture in which information obtained within the plantation-row branch is passed to the plant detection branch and fed back to the row branch; both branches are then refined by a Multi-Stage Refinement method. In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure of 0.876. These results were superior to those of other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE of 1.409 citrus trees per patch, an MRE of 0.0615, a precision of 0.922, a recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores of 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plantation-rows in UAV images across different crop types.
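The abstract reports results in terms of MAE, MRE, precision, recall, and F-measure. A minimal sketch of these metrics, assuming their standard definitions (the paper itself specifies the exact detection-matching protocol; the per-patch counts below are hypothetical illustration values, not data from the study):

```python
def mae(predicted, actual):
    """Mean absolute error between predicted and true counts per patch."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def mre(predicted, actual):
    """Mean relative error: absolute count error normalized by the true count."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

def f_measure(precision, recall):
    """Harmonic mean of precision and recall (F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-patch plant counts, for illustration only:
pred = [98, 105, 87]
true = [100, 100, 90]
print(round(mae(pred, true), 3))           # 3.333
print(round(mre(pred, true), 4))           # 0.0344
print(round(f_measure(0.856, 0.905), 3))   # 0.880
```

The last line reproduces (to rounding) the corn-dataset F-measure of 0.876 reported in the abstract from its precision and recall values, confirming the standard harmonic-mean definition is consistent with those figures.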