Paper Title

Deep ensemble learning for segmenting tuberculosis-consistent manifestations in chest radiographs

Paper Authors

Rajaraman, Sivaramakrishnan; Yang, Feng; Zamzmi, Ghada; Guo, Peng; Xue, Zhiyun; Antani, Sameer K.

Paper Abstract

Automated segmentation of tuberculosis (TB)-consistent lesions in chest X-rays (CXRs) using deep learning (DL) methods can help reduce radiologist effort, supplement clinical decision-making, and potentially result in improved patient treatment. The majority of works in the literature discuss training automatic segmentation models using coarse bounding box annotations. However, the granularity of the bounding box annotation could result in the inclusion of a considerable fraction of false positives and negatives at the pixel level that may adversely impact overall semantic segmentation performance. This study (i) evaluates the benefits of using fine-grained annotations of TB-consistent lesions and (ii) trains and constructs ensembles of the variants of U-Net models for semantically segmenting TB-consistent lesions in both original and bone-suppressed frontal CXRs. We evaluated segmentation performance using several ensemble methods such as bitwise-AND, bitwise-OR, bitwise-MAX, and stacking. We observed that the stacking ensemble demonstrated superior segmentation performance (Dice score: 0.5743, 95% confidence interval: (0.4055, 0.7431)) compared to the individual constituent models and other ensemble methods. To the best of our knowledge, this is the first study to apply ensemble learning to improve fine-grained TB-consistent lesion segmentation performance.
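
As a rough illustration of the pixel-wise ensembling and Dice evaluation mentioned in the abstract, the sketch below is a minimal NumPy example, not the authors' implementation: it fuses probability maps from several hypothetical U-Net variants with bitwise AND, bitwise OR, and pixel-wise MAX rules and scores the fused mask against a reference mask. The function names, the 0.5 threshold, and the random inputs are all assumptions made for illustration.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def ensemble_masks(prob_maps, method="max", threshold=0.5):
    """Fuse probability maps from several segmentation models pixel-wise.

    prob_maps: list of (H, W) arrays in [0, 1] (hypothetical U-Net outputs).
    method: 'and', 'or', or 'max'.
    """
    binarized = np.stack([(p >= threshold).astype(np.uint8) for p in prob_maps])
    if method == "and":   # keep only pixels every model marks as lesion
        return np.bitwise_and.reduce(binarized, axis=0)
    if method == "or":    # keep pixels any model marks as lesion
        return np.bitwise_or.reduce(binarized, axis=0)
    if method == "max":   # pixel-wise maximum probability, then threshold
        fused_prob = np.maximum.reduce(np.stack(prob_maps), axis=0)
        return (fused_prob >= threshold).astype(np.uint8)
    raise ValueError(f"unknown ensemble method: {method}")

# Toy usage with random "predictions" standing in for three model outputs.
rng = np.random.default_rng(0)
prob_maps = [rng.random((256, 256)) for _ in range(3)]
reference = (rng.random((256, 256)) >= 0.5).astype(np.uint8)
for method in ("and", "or", "max"):
    fused = ensemble_masks(prob_maps, method=method)
    print(method, round(dice_score(fused, reference), 4))
```

The stacking ensemble reported in the paper would instead train a meta-model on the constituent models' outputs rather than applying a fixed pixel-wise rule; that step is beyond this sketch.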
