Paper Title

Image Fine-grained Inpainting

Authors

Zheng Hui, Jie Li, Xiumei Wang, Xinbo Gao

Abstract

Image inpainting techniques have recently shown promising improvement with the assistance of generative adversarial networks (GANs). However, most of them often suffer from completed results with unreasonable structure or blurriness. To mitigate this problem, in this paper, we present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields. Benefiting from the properties of this network, we can more easily recover large regions in an incomplete image. To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss that concentrates on uncertain areas and enhances semantic details. Besides, we devise a geometrical alignment constraint term to compensate for the pixel-based distance between predicted features and ground-truth ones. We also employ a discriminator with local and global branches to ensure local-global content consistency. To further improve the quality of generated images, discriminator feature matching on the local branch is introduced, which dynamically minimizes the similarity of intermediate features between synthetic and ground-truth patches. Extensive experiments on several public datasets demonstrate that our approach outperforms current state-of-the-art methods. Code is available at https://github.com/Zheng222/DMFN.
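The discriminator feature matching described above compares intermediate activations of the local discriminator branch on synthesized versus ground-truth patches. A minimal NumPy sketch of such a loss is given below; the function name, the per-layer L1 averaging, and the uniform layer weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Mean L1 distance between corresponding intermediate feature maps.

    real_feats / fake_feats: lists of np.ndarray, one per discriminator
    layer, extracted from ground-truth and synthesized patches.
    This is a hypothetical sketch; the paper may weight layers differently.
    """
    assert len(real_feats) == len(fake_feats)
    # Average the absolute difference within each layer, then across layers.
    per_layer = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return float(np.mean(per_layer))
```

Minimizing this quantity pushes the generator's output to produce discriminator features statistically similar to those of real patches, which tends to sharpen texture compared to pixel losses alone.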
