Paper Title

Training Generative Adversarial Networks with Limited Data

Paper Authors

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila

Paper Abstract

Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
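The abstract describes the mechanism only at a high level: augment every image the discriminator sees, and adapt the augmentation strength to how strongly the discriminator is overfitting. Below is a minimal PyTorch sketch of that idea. The specific augmentations (horizontal flip, small translation), the target value, the adjustment step, and the class name are illustrative assumptions, not the authors' exact pipeline or hyperparameters.

```python
# A minimal sketch of adaptive discriminator augmentation, assuming a standard
# PyTorch GAN training loop. Augmentation choices and hyperparameters below are
# illustrative assumptions, not the paper's exact recipe.
import torch


class AdaptiveAugment:
    """Applies stochastic augmentations to every image shown to the
    discriminator, with a probability p that is raised when the discriminator
    appears to overfit and lowered otherwise."""

    def __init__(self, target=0.6, adjust_step=0.01):
        self.p = 0.0                 # current augmentation probability
        self.target = target         # target value of the overfitting heuristic
        self.adjust_step = adjust_step

    def augment(self, images):
        # Two simple, differentiable augmentations, each applied per image
        # with probability p: horizontal flip and a small fixed translation.
        b = images.size(0)
        flip_mask = (torch.rand(b, device=images.device) < self.p).view(-1, 1, 1, 1)
        images = torch.where(flip_mask, images.flip(-1), images)

        shift_mask = (torch.rand(b, device=images.device) < self.p).view(-1, 1, 1, 1)
        shifted = torch.roll(images, shifts=(2, 2), dims=(-2, -1))
        images = torch.where(shift_mask, shifted, images)
        return images

    def update(self, d_real_logits):
        # Overfitting heuristic: the mean sign of the discriminator outputs on
        # real images drifts toward +1 as it memorizes the training set.
        r_t = d_real_logits.sign().mean().item()
        if r_t > self.target:
            self.p = min(self.p + self.adjust_step, 1.0)
        else:
            self.p = max(self.p - self.adjust_step, 0.0)
```

In a full training loop, both real and generated images would be passed through `augment()` before the discriminator evaluates them (so the generator also adapts through the augmented discriminator), and `update()` would be called periodically with the discriminator's outputs on real images, so the augmentation strength grows only when overfitting is detected and stays near zero when data is plentiful.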
