Paper Title

Unsupervised Domain Adaptation with Histogram-gated Image Translation for Delayered IC Image Analysis

Authors

Yee-Yang Tee, Deruo Cheng, Chye-Soon Chee, Tong Lin, Yiqiong Shi, Bah-Hwee Gwee

Abstract

Deep learning has achieved great success in the challenging circuit annotation task by employing Convolutional Neural Networks (CNN) for the segmentation of circuit structures. Deep learning approaches require a large amount of manually annotated training data to achieve good performance, and performance can degrade when a model trained on a given dataset is applied to a different dataset. This is commonly known as the domain shift problem for circuit annotation, which stems from the possibly large variations in distribution across different image datasets. The different image datasets could be obtained from different devices or from different layers within a single device. To address the domain shift problem, we propose Histogram-gated Image Translation (HGIT), an unsupervised domain adaptation framework which transforms images from a given source dataset to the domain of a target dataset, and utilizes the transformed images for training a segmentation network. Specifically, our HGIT performs generative adversarial network (GAN)-based image translation and utilizes histogram statistics for data curation. Experiments were conducted on a single labeled source dataset adapted to three different target datasets (without labels for training), and the segmentation performance was evaluated for each target dataset. We have demonstrated that our method achieves the best performance compared to the reported domain adaptation techniques, and is also reasonably close to the fully supervised benchmark.
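The abstract does not spell out how the histogram statistics gate the curated data. A plausible reading is that each GAN-translated image is kept only if its intensity histogram is sufficiently close to a reference histogram computed from the target domain. The sketch below illustrates that idea with a histogram-intersection similarity; the function names, the 64-bin setting, and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def grayscale_histogram(img, bins=64):
    """Normalized intensity histogram of an 8-bit grayscale image array."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def histogram_gate(translated_imgs, target_ref_hist, threshold=0.5):
    """Keep GAN-translated images whose histogram-intersection similarity
    with the target-domain reference histogram exceeds the threshold.
    (Illustrative stand-in for the paper's histogram-based curation.)"""
    kept = []
    for img in translated_imgs:
        h = grayscale_histogram(img)
        # Histogram intersection: 1.0 for identical distributions, ~0 for disjoint.
        similarity = np.minimum(h, target_ref_hist).sum()
        if similarity >= threshold:
            kept.append(img)
    return kept
```

Curating with a cheap histogram statistic like this filters out badly translated images before segmentation training, without needing any target-domain labels.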
