Title
VBLC: Visibility Boosting and Logit-Constraint Learning for Domain Adaptive Semantic Segmentation under Adverse Conditions
Authors
Abstract
Generalizing models trained on normal visual conditions to target domains under adverse conditions is in high demand for practical systems. One prevalent solution is to bridge the domain gap between clear- and adverse-condition images so as to make satisfactory predictions on the target. However, previous methods often rely on additional reference images of the same scenes taken under normal conditions, which are quite difficult to collect in reality. Furthermore, most of them focus on an individual adverse condition, such as nighttime or fog, weakening the model's versatility when it encounters other adverse weather. To overcome these limitations, we propose a novel framework, Visibility Boosting and Logit-Constraint learning (VBLC), tailored for superior normal-to-adverse adaptation. VBLC explores the potential of dispensing with reference images while resolving a mixture of adverse conditions simultaneously. In detail, we first propose the visibility boost module to dynamically improve target images via certain priors at the image level. Then we identify the overconfidence drawback of the conventional cross-entropy loss in self-training methods and devise logit-constraint learning, which enforces a constraint on the logit outputs during training to mitigate this pain point. To the best of our knowledge, this is a new perspective on tackling such a challenging task. Extensive experiments on two normal-to-adverse domain adaptation benchmarks, i.e., Cityscapes -> ACDC and Cityscapes -> FoggyCityscapes + RainCityscapes, verify the effectiveness of VBLC, where it establishes a new state of the art. Code is available at https://github.com/BIT-DA/VBLC.
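To make the logit-constraint idea concrete, here is a minimal PyTorch sketch of one plausible instantiation: computing cross-entropy on L2-normalized logits so that the achievable softmax confidence is bounded, which counteracts the overconfidence that plain cross-entropy encourages during self-training on pseudo-labels. The normalization scheme, the temperature `tau`, and the function name are illustrative assumptions, not the paper's exact formulation; consult the released code for the authors' definition.

```python
import torch
import torch.nn.functional as F


def logit_constrained_ce(logits: torch.Tensor,
                         target: torch.Tensor,
                         tau: float = 1.0,
                         eps: float = 1e-7) -> torch.Tensor:
    """Cross-entropy with an L2 constraint on the logit vector (sketch).

    logits: (N, C) raw class scores; target: (N,) class indices.
    Dividing each logit vector by its L2 norm caps its magnitude, so
    the softmax output can no longer be pushed arbitrarily close to a
    one-hot vector; `tau` (hypothetical) scales the constrained logits.
    """
    norm = logits.norm(p=2, dim=1, keepdim=True).clamp_min(eps)
    constrained = logits / (norm * tau)
    return F.cross_entropy(constrained, target)
```

In a self-training loop this loss would simply replace the usual `F.cross_entropy(logits, pseudo_labels)` on target-domain predictions; the gradient then cannot be reduced by inflating logit magnitudes alone, only by improving the direction of the logit vector.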