Paper Title

Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation

Paper Authors

Pingyang Dai, Peixian Chen, Qiong Wu, Xiaopeng Hong, Qixiang Ye, Qi Tian, Rongrong Ji

Paper Abstract

Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain. Many efforts have been made to eliminate the mismatch between the distributions of training and testing data by learning domain-invariant representations. However, the learned representations are usually not task-oriented, i.e., being class-discriminative and domain-transferable simultaneously. This drawback limits the flexibility of UDA in complicated open-set tasks where no labels are shared between domains. In this paper, we break the concept of task-orientation into task-relevance and task-irrelevance, and propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA. The dynamic disentangling network effectively disentangles data representations into two components: the task-relevant ones embedding critical information associated with the task across domains, and the task-irrelevant ones with the remaining non-transferable or disturbing information. These two components are regularized by a group of task-specific objective functions across domains. Such regularization explicitly encourages disentangling and avoids the use of generative models or decoders. Experiments in complicated, open-set scenarios (retrieval tasks) and empirical benchmarks (classification tasks) demonstrate that the proposed method captures rich disentangled information and achieves superior performance.
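To make the disentangling idea concrete, below is a minimal PyTorch sketch of splitting a shared feature into a task-relevant part (used for classification) and a task-irrelevant part, with a regularizer instead of a decoder. The module names, layer sizes, orthogonality loss, and weighting here are illustrative assumptions; the abstract does not specify DTDN's actual architecture or its group of task-specific objectives.

```python
# Hedged sketch of representation disentangling for UDA (assumed details,
# not the paper's actual DTDN implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Disentangler(nn.Module):
    """Splits a shared feature into task-relevant and task-irrelevant parts."""
    def __init__(self, feat_dim=256, part_dim=128):
        super().__init__()
        self.relevant = nn.Sequential(nn.Linear(feat_dim, part_dim), nn.ReLU())
        self.irrelevant = nn.Sequential(nn.Linear(feat_dim, part_dim), nn.ReLU())

    def forward(self, f):
        return self.relevant(f), self.irrelevant(f)

backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())  # toy shared encoder
disentangler = Disentangler()
classifier = nn.Linear(128, 10)  # task head sees only the task-relevant part

x_src = torch.randn(8, 784)              # labeled source-domain batch
y_src = torch.randint(0, 10, (8,))

z_rel, z_irr = disentangler(backbone(x_src))
cls_loss = F.cross_entropy(classifier(z_rel), y_src)

# One possible disentangling regularizer: push the two parts toward
# orthogonality so the irrelevant part carries no class information.
# This is an assumption; the paper instead regularizes both components
# with task-specific objectives across domains, avoiding decoders.
cosine = (F.normalize(z_rel, dim=1) * F.normalize(z_irr, dim=1)).sum(dim=1)
ortho_loss = cosine.pow(2).mean()

loss = cls_loss + 0.1 * ortho_loss       # weighting factor is arbitrary here
loss.backward()
```

In this sketch only `z_rel` feeds the task head, mirroring the abstract's claim that the task-relevant component embeds the critical cross-domain information, while the regularizer discourages that information from leaking into `z_irr`.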
