Paper Title

OCTAve: 2D en face Optical Coherence Tomography Angiography Vessel Segmentation in Weakly-Supervised Learning with Locality Augmentation

Paper Authors

Amrest Chinkamol, Vetit Kanjaras, Phattarapong Sawangjai, Yitian Zhao, Thapanun Sudhawiyangkul, Chantana Chantrapornchai, Cuntai Guan, Theerawit Wilaiprasitporn

Paper Abstract

While there has been increasing research on using deep learning techniques to extract vascular structure from 2D en face OCTA, it is well known that annotating data on curvilinear structures such as the retinal vasculature is very costly and time-consuming, and few works have tried to address the annotation problem. In this work, we propose the application of a scribble-based weakly-supervised learning method to automate the pixel-level annotation. The proposed method, called OCTAve, combines weakly-supervised learning on scribble-annotated ground truth with an adversarial and a novel self-supervised deep supervision. Our novel mechanism is designed to utilize the discriminative outputs from the discrimination layers of a UNet-like architecture, where the Kullback-Leibler divergence between the aggregate discriminative outputs and the segmentation map prediction is minimized during training. As shown in our experiments, this combined method leads to better localization of the vascular structure. We validate our proposed method on large public datasets, i.e., ROSE and OCTA-500. The segmentation performance is compared against both state-of-the-art fully-supervised and scribble-based weakly-supervised approaches. The implementation of our work used in the experiments is located at [LINK].
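To make the KL-based deep-supervision term concrete, the sketch below shows one plausible PyTorch rendering: intermediate discriminative maps are upsampled, averaged into an aggregate, and a per-pixel Bernoulli KL divergence is taken against the final segmentation prediction. This is a minimal sketch under assumed shapes, sigmoid probabilities, and mean aggregation; the function name kl_deep_supervision_loss and its arguments are illustrative and not the authors' released implementation (see [LINK] for that).

import torch
import torch.nn.functional as F

def kl_deep_supervision_loss(discriminative_maps, seg_logits):
    """Hypothetical sketch of a self-supervised deep-supervision term:
    minimize the KL divergence between the aggregate of intermediate
    discriminative outputs and the final segmentation prediction.
    Aggregation (mean) and upsampling mode are assumptions."""
    # Upsample each intermediate map to the final prediction's spatial size,
    # then average them into a single aggregate map.
    target_size = seg_logits.shape[-2:]
    upsampled = [
        F.interpolate(m, size=target_size, mode="bilinear", align_corners=False)
        for m in discriminative_maps
    ]
    aggregate = torch.stack(upsampled, dim=0).mean(dim=0)

    # Treat each pixel as a Bernoulli distribution and compute
    # KL(p_seg || p_agg) summed over both outcomes, averaged over pixels.
    p_seg = torch.sigmoid(seg_logits)
    p_agg = torch.sigmoid(aggregate)
    eps = 1e-7  # numerical stability for the logarithms
    kl = p_seg * torch.log((p_seg + eps) / (p_agg + eps)) + \
         (1 - p_seg) * torch.log((1 - p_seg + eps) / (1 - p_agg + eps))
    return kl.mean()

# Usage with dummy tensors: three decoder-stage maps and a final prediction.
maps = [torch.randn(2, 1, s, s) for s in (32, 64, 128)]
logits = torch.randn(2, 1, 128, 128)
loss = kl_deep_supervision_loss(maps, logits)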
