Paper Title
SRTGAN: Triplet Loss based Generative Adversarial Network for Real-World Super-Resolution
Paper Authors
Paper Abstract
Many applications such as forensics, surveillance, satellite imaging, medical imaging, etc., demand High-Resolution (HR) images. However, obtaining an HR image is not always possible due to the limitations of optical sensors and their costs. An alternative solution called Single Image Super-Resolution (SISR) is a software-driven approach that aims to take a Low-Resolution (LR) image and obtain the HR image. Most supervised SISR solutions use the ground-truth HR image as a target and do not include the information provided in the LR image, which could be valuable. In this work, we introduce a Triplet Loss-based Generative Adversarial Network, hereafter referred to as SRTGAN, for the Image Super-Resolution problem on real-world degradations. We introduce a new triplet-based adversarial loss function that exploits the information provided in the LR image by using it as a negative sample. Giving the patch-based discriminator access to both HR and LR images allows it to better differentiate between them, thereby strengthening the adversary. Further, we propose to fuse the adversarial loss, content loss, perceptual loss, and quality loss to obtain a Super-Resolution (SR) image with high perceptual fidelity. We validate the superior performance of the proposed method over other existing methods on the RealSR dataset in terms of quantitative and qualitative metrics.
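The core idea described in the abstract — treating the LR image as a negative sample in a triplet-style adversarial objective — can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function name, the use of scalar discriminator scores (the paper uses a patch-based discriminator, which would produce per-patch score maps), and the squared-distance margin form are all assumptions for illustration.

```python
def triplet_adversarial_loss(d_sr, d_hr, d_lr, margin=1.0):
    """Hypothetical triplet-style adversarial loss for the generator.

    d_sr: discriminator score for the generated SR image (anchor)
    d_hr: discriminator score for the ground-truth HR image (positive)
    d_lr: discriminator score for the LR input (negative)

    The generator is pushed so that the SR score is closer to the HR
    score than to the LR score, by at least `margin`.
    """
    d_pos = (d_sr - d_hr) ** 2  # distance to the positive (HR) sample
    d_neg = (d_sr - d_lr) ** 2  # distance to the negative (LR) sample
    return max(0.0, d_pos - d_neg + margin)


# When the SR score matches HR and is far from LR, the loss vanishes:
loss_good = triplet_adversarial_loss(d_sr=1.0, d_hr=1.0, d_lr=0.0)

# When the SR score sits near the LR score, the loss is large:
loss_bad = triplet_adversarial_loss(d_sr=0.0, d_hr=1.0, d_lr=0.0)
```

The abstract's total generator objective would then fuse this term with content, perceptual, and quality losses via a weighted sum; the weights are training hyperparameters not specified in the abstract.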