Paper Title
TurbuGAN: An Adversarial Learning Approach to Spatially-Varying Multiframe Blind Deconvolution with Applications to Imaging Through Turbulence
Paper Authors
Paper Abstract
We present a self-supervised and self-calibrating multi-shot approach to imaging through atmospheric turbulence, called TurbuGAN. Our approach requires no paired training data, adapts itself to the distribution of the turbulence, leverages domain-specific data priors, and can generalize from tens to thousands of measurements. We achieve such functionality through an adversarial sensing framework adapted from CryoGAN, which uses a discriminator network to match the distributions of captured and simulated measurements. Our framework builds on CryoGAN by (1) generalizing the forward measurement model to incorporate physically accurate and computationally efficient models for light propagation through anisoplanatic turbulence, (2) enabling adaptation to slightly misspecified forward models, and (3) leveraging domain-specific prior knowledge using pretrained generative networks, when available. We validate TurbuGAN on both computationally simulated and experimentally captured images distorted with anisoplanatic turbulence.
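For intuition, the adversarial sensing loop summarized above can be sketched in a few dozen lines of PyTorch. The sketch below is illustrative only and is not the authors' implementation: `Discriminator`, `simulate_measurement`, and all hyperparameters are hypothetical, and a random per-frame Gaussian blur stands in for the paper's physically accurate anisoplanatic turbulence model.

```python
# Minimal adversarial-sensing sketch in the spirit of TurbuGAN/CryoGAN.
# All names and settings here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Scores whether a measurement looks captured (real) or simulated (fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),
        )

    def forward(self, x):
        return self.net(x)

def simulate_measurement(scene, kernel_size=9):
    """Stand-in forward model: blur the scene estimate with a random Gaussian
    kernel per frame, mimicking frame-to-frame turbulence variability.
    (The paper instead uses a physically accurate anisoplanatic model.)"""
    b, h, w = scene.shape[0], scene.shape[-2], scene.shape[-1]
    sigma = 1.0 + 2.0 * torch.rand(b, device=scene.device)
    coords = torch.arange(kernel_size, device=scene.device) - kernel_size // 2
    kernels = torch.exp(-coords[None, :] ** 2 / (2 * sigma[:, None] ** 2))
    kernels = kernels / kernels.sum(dim=1, keepdim=True)
    k2d = kernels[:, :, None] * kernels[:, None, :]              # (b, k, k)
    out = F.conv2d(scene.reshape(1, b, h, w), k2d.unsqueeze(1),
                   padding=kernel_size // 2, groups=b)
    return out.reshape_as(scene)

# Unknown sharp scene, parameterized directly as a learnable image. A
# pretrained generative network could be substituted when a domain-specific
# prior is available, as the abstract notes.
scene = torch.rand(1, 1, 64, 64, requires_grad=True)
disc = Discriminator()
opt_scene = torch.optim.Adam([scene], lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Captured multi-frame measurements would be loaded here; random tensors
# keep the sketch self-contained and runnable.
captured = torch.rand(16, 1, 64, 64)

for step in range(100):
    real = captured[torch.randint(0, captured.shape[0], (4,))]
    fake = simulate_measurement(scene.expand(4, -1, -1, -1))

    # Discriminator update: distinguish captured from simulated measurements.
    opt_disc.zero_grad()
    d_loss = bce(disc(real), torch.ones(4, 1)) + \
             bce(disc(fake.detach()), torch.zeros(4, 1))
    d_loss.backward()
    opt_disc.step()

    # Scene update: make simulated measurements indistinguishable from real ones.
    opt_scene.zero_grad()
    g_loss = bce(disc(fake), torch.ones(4, 1))
    g_loss.backward()
    opt_scene.step()
```

The key design point the sketch tries to convey is that no paired ground-truth/measurement data is needed: the scene estimate is driven only by the requirement that its simulated measurements match the distribution of the captured frames, as judged by the discriminator.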