Paper Title

Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training

Paper Authors

Teo Spadotto, Marco Toldo, Umberto Michieli, Pietro Zanuttigh

Paper Abstract

Unsupervised Domain Adaptation (UDA) aims at improving the generalization capability of a model trained on a source domain to perform well on a target domain for which no labeled data is available. In this paper, we consider the semantic segmentation of urban scenes and we propose an approach to adapt a deep neural network trained on synthetic data to real scenes addressing the domain shift between the two different data distributions. We introduce a novel UDA framework where a standard supervised loss on labeled synthetic data is supported by an adversarial module and a self-training strategy aiming at aligning the two domain distributions. The adversarial module is driven by a couple of fully convolutional discriminators dealing with different domains: the first discriminates between ground truth and generated maps, while the second between segmentation maps coming from synthetic or real world data. The self-training module exploits the confidence estimated by the discriminators on unlabeled data to select the regions used to reinforce the learning process. Furthermore, the confidence is thresholded with an adaptive mechanism based on the per-class overall confidence. Experimental results prove the effectiveness of the proposed strategy in adapting a segmentation network trained on synthetic datasets like GTA5 and SYNTHIA, to real world datasets like Cityscapes and Mapillary.
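To make the self-training idea concrete, the sketch below shows one way the adaptive, per-class confidence thresholding described in the abstract could be implemented. This is not the authors' code: the function name `adaptive_pseudo_labels`, the `base_threshold` parameter, and the exact rule of scaling the threshold by the mean per-class discriminator confidence are illustrative assumptions.

```python
import numpy as np


def adaptive_pseudo_labels(pred_probs, disc_confidence, base_threshold=0.5):
    """Select high-confidence target pixels as pseudo-labels for self-training.

    pred_probs:      (C, H, W) softmax output of the segmentation network on a target image
    disc_confidence: (H, W) per-pixel confidence estimated by a discriminator
    base_threshold:  scalar, scaled per class by the overall class confidence (assumption)

    Returns an (H, W) map of pseudo-labels, with unselected pixels set to -1 (ignored).
    """
    num_classes, h, w = pred_probs.shape
    hard_labels = pred_probs.argmax(axis=0)  # (H, W) predicted class per pixel

    pseudo_labels = np.full((h, w), -1, dtype=np.int64)
    for c in range(num_classes):
        mask = hard_labels == c
        if not mask.any():
            continue
        # Per-class adaptive threshold: scale the base threshold by the mean
        # discriminator confidence over pixels currently predicted as class c.
        class_conf = disc_confidence[mask].mean()
        threshold = base_threshold * class_conf
        selected = mask & (disc_confidence >= threshold)
        pseudo_labels[selected] = c
    return pseudo_labels


if __name__ == "__main__":
    # Toy usage with random tensors (19 classes, as in Cityscapes-style benchmarks).
    probs = np.random.dirichlet(np.ones(19), size=(128, 256)).transpose(2, 0, 1)
    conf = np.random.rand(128, 256)
    labels = adaptive_pseudo_labels(probs, conf)
    print(labels.shape, (labels >= 0).mean())
```

The key design point this illustrates is that each class gets its own threshold derived from its overall confidence, so rare or hard classes are not systematically excluded from the pseudo-labels used to reinforce training on unlabeled real-world images.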
