Paper Title
A Systematic Survey of Regularization and Normalization in GANs
Paper Authors
Paper Abstract
Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed under the non-parametric assumption that networks have infinite capacity. However, it remains unknown whether GANs can fit the target distribution without any prior information. Because this assumption is too strong, many issues remain unaddressed in GAN training, such as non-convergence, mode collapse, and gradient vanishing. Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge, no comprehensive survey primarily focuses on the objectives and development of these methods, apart from a few limited-scope studies. In this work, we conduct a comprehensive survey of regularization and normalization techniques from different perspectives of GAN training. First, we systematically describe the different perspectives of GAN training and thereby derive the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the applications of regularization and normalization techniques frequently employed in state-of-the-art GANs. Finally, we highlight potential future directions of research in this domain. Code and studies related to the regularization and normalization of GANs are summarized at https://github.com/iceli1007/GANs-Regularization-Review.
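A representative example of the normalization techniques the abstract refers to is spectral normalization, which constrains the Lipschitz constant of the discriminator by dividing each weight matrix by an estimate of its largest singular value. Below is a minimal NumPy sketch of this idea using power iteration; the function name and iteration count are illustrative, not taken from any particular implementation.

```python
import numpy as np

def spectral_normalize(W, n_iter=20, seed=0):
    """Divide W by an estimate of its largest singular value (spectral norm),
    obtained via power iteration, so the result has spectral norm ~1."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # approximated top singular value
    return W / sigma

# A matrix with singular values 3 and 1: after normalization its
# spectral norm is driven to 1, bounding the layer's Lipschitz constant.
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
W_sn = spectral_normalize(W)
```

In practice this normalization is applied to every linear/convolutional layer of the discriminator at each training step, which is one concrete way "prior information" (a Lipschitz constraint) is injected to stabilize GAN training.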