Paper Title

Improving Semantic Segmentation via Self-Training

Paper Authors

Yi Zhu, Zhongyue Zhang, Chongruo Wu, Zhi Zhang, Tong He, Hang Zhang, R. Manmatha, Mu Li, Alexander Smola

Paper Abstract

Deep learning usually achieves the best results with complete supervision. In the case of semantic segmentation, this means that large amounts of pixelwise annotations are required to learn accurate models. In this paper, we show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm. We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data. Our robust training framework can digest human-annotated and pseudo labels jointly, achieving top performance on the Cityscapes, CamVid and KITTI datasets while requiring significantly less supervision. We also demonstrate the effectiveness of self-training on a challenging cross-domain generalization task, outperforming conventional fine-tuning methods by a large margin. Lastly, to alleviate the computational burden caused by the large amount of pseudo labels, we propose a fast training schedule to accelerate the training of segmentation models by up to 2x without performance degradation.
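The teacher-student recipe described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the paper's implementation: the one-layer "backbone", the 0.9 confidence threshold, and the random toy tensors are all placeholder assumptions chosen only to make the three steps (train teacher, pseudo-label, train student jointly) concrete and runnable.

```python
# Minimal self-training sketch for semantic segmentation (illustrative only;
# model, threshold, and data below are placeholders, not the paper's setup).
import torch
import torch.nn as nn

def make_model(num_classes: int) -> nn.Module:
    # Stand-in for a real segmentation backbone (e.g. a DeepLab variant).
    return nn.Conv2d(3, num_classes, kernel_size=1)

def train(model, images, targets, epochs=5, ignore_index=255):
    # Pixels marked with ignore_index (low-confidence pseudo labels)
    # contribute no gradient.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss(ignore_index=ignore_index)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        opt.step()

@torch.no_grad()
def pseudo_label(teacher, images, threshold=0.9, ignore_index=255):
    # Hard pseudo labels from the teacher's per-pixel argmax;
    # low-confidence pixels are masked out.
    probs = teacher(images).softmax(dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_index
    return labels

num_classes = 19  # e.g. Cityscapes
labeled_x = torch.randn(4, 3, 32, 32)
labeled_y = torch.randint(0, num_classes, (4, 32, 32))
unlabeled_x = torch.randn(8, 3, 32, 32)

# 1) Train the teacher on human-annotated data.
teacher = make_model(num_classes)
train(teacher, labeled_x, labeled_y)

# 2) Generate pseudo labels on the unlabeled set.
pseudo_y = pseudo_label(teacher, unlabeled_x)

# 3) Train the student jointly on human and pseudo labels.
student = make_model(num_classes)
train(student,
      torch.cat([labeled_x, unlabeled_x]),
      torch.cat([labeled_y, pseudo_y]))
```

Masking low-confidence pixels with the loss's ignore_index is one simple way to let human and pseudo labels be digested jointly by a single cross-entropy objective; the paper's actual training framework and sampling strategy may differ.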
