Paper Title
Contradictory Structure Learning for Semi-supervised Domain Adaptation
Paper Authors
Paper Abstract
Current adversarial adaptation methods attempt to align cross-domain features, yet two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain. To address these challenges, we propose a novel framework for semi-supervised domain adaptation that unifies the learning of opposite structures (UODA). UODA consists of a generator and two classifiers (i.e., a source-scattering classifier and a target-clustering classifier) that are trained for contradictory purposes. The target-clustering classifier attempts to cluster the target features, improving intra-class density and enlarging inter-class divergence. Meanwhile, the source-scattering classifier is designed to scatter the source features, enhancing the smoothness of the decision boundary. By alternating between the source-feature expansion and target-feature clustering procedures, the target features become well enclosed within the dilated boundary of the corresponding source features. This strategy precisely aligns the cross-domain features while simultaneously countering the source bias. Moreover, to overcome model collapse during training, we progressively update the measurement of feature distances and their representations via an adversarial training paradigm. Extensive experiments on the DomainNet and Office-Home benchmarks demonstrate the superiority of our approach over state-of-the-art methods.
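The abstract describes an architecture that lends itself to a concrete sketch: a shared generator feeding two classifier heads with opposing objectives. The following minimal PyTorch-style sketch is an illustration only; the layer sizes, the cosine-similarity heads, the loss weight `lam`, and the use of prediction entropy for the scattering/clustering objectives are assumptions not taken from the abstract, and the full adversarial alternation between the generator and the classifiers (e.g., via a gradient-reversal layer) is omitted for brevity.

```python
# Minimal sketch of a generator-plus-two-classifiers setup in the spirit of UODA.
# All hyper-parameters, layer sizes, and the entropy-based losses below are
# illustrative assumptions; they are not specified in the paper's abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Shared feature extractor."""
    def __init__(self, in_dim=2048, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class Classifier(nn.Module):
    """Temperature-scaled cosine-similarity head (an assumed design choice)."""
    def __init__(self, feat_dim=512, num_classes=126, temp=0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.temp = temp

    def forward(self, feat):
        feat = F.normalize(feat, dim=1)
        weight = F.normalize(self.weight, dim=1)
        return feat @ weight.t() / self.temp


def entropy(logits):
    """Mean Shannon entropy of the softmax predictions."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()


gen = Generator()
clf_scatter = Classifier()   # source-scattering classifier
clf_cluster = Classifier()   # target-clustering classifier


def training_step(x_src, y_src, x_tgt_unlabeled, lam=0.1):
    """One simplified step: supervised loss on labeled source data, plus the two
    contradictory objectives (scatter source features, cluster target features).
    The adversarial generator/classifier alternation is omitted here."""
    f_src = gen(x_src)
    f_tgt = gen(x_tgt_unlabeled)
    # Supervised classification on labeled data for both heads.
    loss_cls = (F.cross_entropy(clf_scatter(f_src), y_src)
                + F.cross_entropy(clf_cluster(f_src), y_src))
    # Scattering head: maximize source prediction entropy to dilate the boundary.
    loss_scatter = -entropy(clf_scatter(f_src))
    # Clustering head: minimize target prediction entropy to tighten target clusters.
    loss_cluster = entropy(clf_cluster(f_tgt))
    return loss_cls + lam * (loss_scatter + loss_cluster)
```

A full training loop would additionally use the few labeled target samples of the semi-supervised setting and update the generator against these entropy terms in an adversarial fashion, which is how the progressive update of feature representations and their distance measurement described in the abstract would be realized.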