Paper Title

WDA-Net: Weakly-Supervised Domain Adaptive Segmentation of Electron Microscopy

Paper Authors

Dafei Qiu, Jiajin Yi, Jialin Peng

Paper Abstract

Accurate segmentation of organelle instances, e.g., mitochondria, is essential for electron microscopy analysis. Despite the outstanding performance of fully supervised methods, they rely heavily on sufficient per-pixel annotated data and are sensitive to domain shift. Aiming to develop a highly annotation-efficient approach with competitive performance, we focus on weakly-supervised domain adaptation (WDA) with a type of extremely sparse and weak annotation that demands minimal annotation effort, i.e., sparse point annotations on only a small subset of object instances. To reduce the performance degradation arising from domain shift, we explore multi-level transferable knowledge by conducting three complementary tasks, i.e., counting, detection, and segmentation, which constitute a task pyramid with different levels of domain invariance. The intuition behind this is that, after investigating a related source domain, it is much easier to spot similar objects in the target domain than to delineate their fine boundaries. Specifically, we enforce the counting estimate as a global constraint on the sparsely supervised detection, which in turn guides the segmentation. A cross-position cut-and-paste augmentation is introduced to further compensate for the annotation sparsity. Extensive validation shows that our model with only 15% point annotations achieves performance comparable to supervised models and is robust to the annotation selection.
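To make the counting-to-detection coupling concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: it assumes the detector predicts a Gaussian center heatmap, so the heatmap's total mass divided by the mass of a single blob yields a differentiable count, which an L1 term then ties to a global count estimate (e.g., from a counting branch). The names `soft_count`, `counting_constraint_loss`, and `gaussian_norm` are hypothetical.

```python
import torch
import torch.nn.functional as F

def soft_count(heatmap: torch.Tensor, gaussian_norm: float) -> torch.Tensor:
    # heatmap: (B, 1, H, W) detector output in [0, 1]. If each instance
    # contributes one Gaussian blob of total mass `gaussian_norm`
    # (~2*pi*sigma^2 for sigma ~ 2 px), integrating the heatmap and
    # dividing by that mass gives a differentiable instance count.
    return heatmap.sum(dim=(1, 2, 3)) / gaussian_norm

def counting_constraint_loss(pred_heatmap: torch.Tensor,
                             count_estimate: torch.Tensor,
                             gaussian_norm: float = 25.0) -> torch.Tensor:
    # L1 penalty tying the count implied by the detection heatmap to a
    # global count estimate. It constrains *how many* objects the
    # detector finds rather than *where*, which is what makes it a
    # weak, domain-robust signal.
    return F.l1_loss(soft_count(pred_heatmap, gaussian_norm), count_estimate)

# Toy usage with dummy tensors:
heatmap = torch.sigmoid(torch.randn(2, 1, 128, 128))
counts = torch.tensor([12.0, 7.0])  # e.g., predicted by a counting branch
loss = counting_constraint_loss(heatmap, counts)
```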
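The cross-position cut-and-paste augmentation can be sketched in the same spirit. The snippet below is an illustrative reading under stated assumptions (2D grayscale EM images, a sparse point/label map, square patches): regions cut around annotated instances are pasted back at random new positions so each labeled object is seen in additional spatial contexts. The paper's exact pasting policy may differ; `cross_position_cut_and_paste` and its parameters are hypothetical.

```python
import numpy as np

def cross_position_cut_and_paste(image, label_map, points, patch=64, rng=None):
    # image, label_map: (H, W) arrays; points: list of (y, x) centers of
    # annotated instances. Each annotated patch is cut out and pasted,
    # together with its labels, at a uniformly sampled new position, so
    # a sparsely annotated instance appears in multiple contexts.
    rng = rng or np.random.default_rng()
    H, W = image.shape
    half = patch // 2
    aug_img, aug_lbl = image.copy(), label_map.copy()
    for y, x in points:
        y0 = int(np.clip(y - half, 0, H - patch))
        x0 = int(np.clip(x - half, 0, W - patch))
        src_img = image[y0:y0 + patch, x0:x0 + patch]
        src_lbl = label_map[y0:y0 + patch, x0:x0 + patch]
        ty = int(rng.integers(0, H - patch))  # new paste position
        tx = int(rng.integers(0, W - patch))
        aug_img[ty:ty + patch, tx:tx + patch] = src_img
        aug_lbl[ty:ty + patch, tx:tx + patch] = src_lbl
    return aug_img, aug_lbl

# Toy usage:
img = np.random.rand(256, 256).astype(np.float32)
lbl = np.zeros((256, 256), dtype=np.uint8)
aug_img, aug_lbl = cross_position_cut_and_paste(img, lbl, points=[(60, 80)])
```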
