Paper Title

CobNet: Cross Attention on Object and Background for Few-Shot Segmentation

Paper Authors

Haoyan Guan, Michael Spratling

Paper Abstract

Few-shot segmentation aims to segment images containing objects from previously unseen classes using only a few annotated samples. Most current methods focus on using object information extracted, with the aid of human annotations, from support images to identify the same objects in new query images. However, background information can also be useful to distinguish objects from their surroundings. Hence, some previous methods also extract background information from the support images. In this paper, we argue that such information is of limited utility, as the background in different images can vary widely. To overcome this issue, we propose CobNet, which utilises information about the background that is extracted from the query images without annotations of those images. Experiments show that our method achieves mean Intersection-over-Union scores of 61.4% and 37.8% for 1-shot segmentation on PASCAL-5i and COCO-20i respectively, outperforming previous methods. It is also shown to produce state-of-the-art performance of 53.7% for weakly-supervised few-shot segmentation, where no annotations are provided for the support images.
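The abstract only outlines the mechanism, so the following is a minimal illustrative sketch rather than the authors' implementation. Assuming a PyTorch-style pipeline, the support object can be summarised by masked average pooling over the annotated region, and cross attention then lets every query-image location attend over a small prototype set (here, one object prototype from the support image plus background prototypes that would be mined from the unannotated query image). All function names and tensor shapes below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def masked_average_pool(feat, mask):
    # feat: (B, C, H, W) support features; mask: (B, 1, Hm, Wm) binary object mask.
    # Returns a (B, C) object prototype averaged over the annotated region.
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def cross_attention(query_feat, prototypes):
    # query_feat: (B, C, H, W) query-image features.
    # prototypes: (B, K, C), e.g. one object prototype from the support image
    # plus K-1 background prototypes taken from the query image itself.
    # Each query location attends over the prototype set and is re-expressed
    # as a weighted mixture of prototypes.
    B, C, H, W = query_feat.shape
    q = query_feat.flatten(2).transpose(1, 2)                    # (B, HW, C)
    attn = torch.softmax(q @ prototypes.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ prototypes                                      # (B, HW, C)
    return out.transpose(1, 2).reshape(B, C, H, W)

# Toy usage with random tensors standing in for backbone features:
support_feat = torch.randn(1, 256, 32, 32)
support_mask = torch.randint(0, 2, (1, 1, 128, 128)).float()
query_feat = torch.randn(1, 256, 32, 32)
obj_proto = masked_average_pool(support_feat, support_mask)      # (1, 256)
bg_protos = torch.randn(1, 3, 256)  # placeholder for query-derived background
protos = torch.cat([obj_proto.unsqueeze(1), bg_protos], dim=1)   # (1, 4, 256)
attended = cross_attention(query_feat, protos)                   # (1, 256, 32, 32)
```

Note that how the background prototypes are actually mined from the query image is not specified in the abstract; the random placeholder above simply marks where that step would go.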
