Paper Title
Prototype Refinement Network for Few-Shot Segmentation
Paper Authors
Paper Abstract
Few-shot segmentation aims to segment new classes given only a few annotated images. It is more challenging than traditional semantic segmentation, which segments known classes with abundant annotated images. In this paper, we propose a Prototype Refinement Network (PRNet) to attack the challenge of few-shot segmentation. It first learns to bidirectionally extract prototypes from both support and query images of the known classes. Furthermore, to extract representative prototypes of the new classes, we perform prototype refinement through adaptation and fusion. The adaptation step enables the model to learn new concepts and is implemented directly by retraining. We are the first to propose prototype fusion, which fuses support prototypes with query prototypes, incorporating knowledge from both sides. It refines prototypes effectively without introducing extra learnable parameters. In this way, the prototypes become more discriminative in low-data regimes. Experiments on PASCAL-$5^i$ and COCO-$20^i$ demonstrate the superiority of our method. In particular, on COCO-$20^i$, PRNet outperforms existing methods by a large margin of 13.1\% in the 1-shot setting.
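To make the prototype ideas in the abstract concrete, below is a minimal sketch (not the authors' released code) of how prototypes might be extracted and fused. The masked-average-pooling extractor and the parameter-free weighted-average fusion rule (with weight `alpha`) are illustrative assumptions; PRNet's exact extraction and fusion operations may differ.

```python
import torch


def masked_average_pooling(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average a (C, H, W) feature map over the spatial positions selected by an (H, W) mask,
    returning a (C,) prototype vector."""
    mask = mask.float()
    weighted = features * mask.unsqueeze(0)            # zero out background positions
    return weighted.sum(dim=(1, 2)) / (mask.sum() + 1e-6)


def fuse_prototypes(support_proto: torch.Tensor,
                    query_proto: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Fuse support and query prototypes without extra learnable parameters.
    A simple weighted average is assumed here for illustration."""
    return alpha * support_proto + (1.0 - alpha) * query_proto


if __name__ == "__main__":
    C, H, W = 256, 32, 32
    support_feat = torch.randn(C, H, W)                 # backbone features of a support image
    query_feat = torch.randn(C, H, W)                   # backbone features of a query image
    support_mask = torch.zeros(H, W)
    support_mask[8:24, 8:24] = 1                        # ground-truth support mask
    query_mask = (torch.rand(H, W) > 0.5).float()       # e.g. a predicted query mask

    p_support = masked_average_pooling(support_feat, support_mask)
    p_query = masked_average_pooling(query_feat, query_mask)
    p_refined = fuse_prototypes(p_support, p_query)
    print(p_refined.shape)                              # torch.Size([256])
```

In this sketch, the refined prototype would then be matched against query features (e.g. by cosine similarity) to produce the segmentation, which is the standard prototype-based pipeline the abstract builds on.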