Paper Title
Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels
Paper Authors
Paper Abstract
Existing unsupervised domain adaptation methods aim to transfer knowledge from a label-rich source domain to an unlabeled target domain. However, obtaining labels for some source domains may be very expensive, making the complete labeling used in prior work impractical. In this work, we investigate a new domain adaptation scenario with sparsely labeled source data, where only a few examples in the source domain have been labeled, while the target domain is unlabeled. We show that when labeled source examples are limited, existing methods often fail to learn features that are discriminative for both source and target domains. We propose a novel Cross-Domain Self-supervised (CDS) learning approach for domain adaptation, which learns features that are not only domain-invariant but also class-discriminative. Our self-supervised learning method captures apparent visual similarity with in-domain self-supervision in a domain-adaptive manner and performs cross-domain feature matching with across-domain self-supervision. In extensive experiments on three standard benchmark datasets, our method significantly boosts target accuracy in the new scenario with few source labels and is even helpful in classical domain adaptation scenarios.
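The abstract distinguishes two forms of self-supervision: in-domain instance discrimination that captures visual similarity within each domain, and across-domain self-supervision that matches features between domains. The following is a minimal numpy sketch of how such a pair of losses could look; the memory-bank formulation, the entropy-based cross-domain matching, and all function names are illustrative assumptions for exposition, not the paper's exact objective.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project features onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def in_domain_instance_loss(feats, bank, idx, tau=0.05):
    """In-domain self-supervision (assumed form): instance discrimination.
    Each feature should match its own slot `idx` in the same-domain memory bank."""
    logits = l2_normalize(feats) @ l2_normalize(bank).T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Cross-entropy against each instance's own bank entry.
    return -np.mean(np.log(probs[np.arange(len(idx)), idx] + 1e-12))

def cross_domain_entropy_loss(feats, other_bank, tau=0.05):
    """Across-domain self-supervision (assumed form): encourage each feature to
    match some instance of the other domain confidently, i.e. have a low-entropy
    similarity distribution over the other domain's memory bank."""
    logits = l2_normalize(feats) @ l2_normalize(other_bank).T / tau
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
```

In this sketch, minimizing the first loss makes features within a domain mutually discriminative, while minimizing the second pulls each feature toward its nearest cross-domain instances, which is one plausible way to obtain features that are both class-discriminative and domain-invariant.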