Title
Gradient Regularized Contrastive Learning for Continual Domain Adaptation
Authors
Abstract
Human beings can quickly adapt to environmental changes by leveraging learning experience. However, the poor ability to adapt to dynamic environments remains a major challenge for AI models. To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labeled source domain and a sequence of unlabeled target domains. There are two major obstacles in this problem: domain shifts and catastrophic forgetting. In this work, we propose Gradient Regularized Contrastive Learning to address both obstacles. At the core of our method, gradient regularization plays two key roles: (1) it enforces the gradient of the contrastive loss not to increase the supervised training loss on the source domain, which maintains the discriminative power of the learned features; (2) it regularizes the gradient update on a new domain not to increase the classification loss on old target domains, which enables the model to adapt to an incoming target domain while preserving performance on previously observed domains. Hence, our method jointly learns semantically discriminative and domain-invariant features from the labeled source domain and the unlabeled target domains. Experiments on the Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach compared to the state-of-the-art.
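The gradient constraints described above (a gradient update must not increase a protected loss) follow the same pattern as GEM-style gradient projection: if the update gradient conflicts with the protected loss's gradient, its conflicting component is removed. This is a minimal illustrative sketch of that projection on flattened gradient vectors, not the paper's actual implementation; `project_gradient` is a hypothetical helper name.

```python
import numpy as np

def project_gradient(g, g_ref):
    """Project the update gradient g so it does not increase the loss
    whose gradient is g_ref, i.e. enforce <g', g_ref> >= 0.

    Illustrative GEM-style projection, not the paper's code.
    """
    dot = float(np.dot(g, g_ref))
    if dot >= 0.0:
        # Gradients already agree: applying g does not increase the
        # protected loss to first order, so keep it unchanged.
        return g
    # Remove the component of g that points against g_ref.
    return g - (dot / float(np.dot(g_ref, g_ref))) * g_ref

# Example: g conflicts with g_ref, so its negative component is removed.
g = np.array([1.0, -1.0])       # e.g. contrastive-loss gradient
g_ref = np.array([0.0, 1.0])    # e.g. source-domain supervised gradient
g_proj = project_gradient(g, g_ref)  # -> array([1., 0.])
```

After projection, `np.dot(g_proj, g_ref) >= 0`, so a small step along the projected gradient does not increase the protected loss to first order; the paper applies this idea both to the source-domain supervised loss and to the classification loss on old target domains.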