Paper Title
Diverse Image Generation via Self-Conditioned GANs
Paper Authors
Paper Abstract
We introduce a simple but effective unsupervised method for generating realistic and diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods when addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics, compared to previous methods.
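The abstract describes conditioning the GAN on pseudo-labels obtained by clustering real images in the discriminator's feature space. Below is a minimal sketch of that idea, not the authors' implementation: the toy generator/discriminator architectures, the feature dimension, the use of scikit-learn's KMeans, and the single training step shown are all illustrative assumptions (in practice the clustering is re-run periodically during training).

```python
# Minimal sketch (assumed architectures, not the paper's code): pseudo-labels come from
# k-means clustering of the discriminator's features; the class-conditional generator and
# discriminator are then trained against those labels so every discovered mode is covered.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

NUM_CLUSTERS, FEAT_DIM, LATENT_DIM = 10, 128, 64  # illustrative hyperparameters

class D(nn.Module):
    """Toy class-conditional discriminator exposing its feature space."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, FEAT_DIM), nn.ReLU())
        self.head = nn.Linear(FEAT_DIM, NUM_CLUSTERS)  # one real/fake logit per cluster

    def forward(self, x, y):
        h = self.features(x)
        return self.head(h).gather(1, y.view(-1, 1)).squeeze(1), h

class G(nn.Module):
    """Toy class-conditional generator: noise concatenated with a cluster embedding."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLUSTERS, LATENT_DIM)
        self.net = nn.Sequential(nn.Linear(2 * LATENT_DIM, 32 * 32 * 3), nn.Tanh())

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1)).view(-1, 3, 32, 32)

def assign_pseudo_labels(disc, images):
    """Cluster discriminator features of real images to obtain pseudo class labels."""
    with torch.no_grad():
        feats = disc.features(images).cpu().numpy()
    labels = KMeans(n_clusters=NUM_CLUSTERS, n_init=10).fit_predict(feats)
    return torch.as_tensor(labels, dtype=torch.long)

# Usage on random stand-in data: recluster, then take one conditional GAN step.
disc, gen = D(), G()
reals = torch.rand(256, 3, 32, 32) * 2 - 1           # stand-in for real images
labels = assign_pseudo_labels(disc, reals)            # re-run periodically during training
fakes = gen(torch.randn(256, LATENT_DIM), labels)     # generator must cover each discovered mode
real_logits, _ = disc(reals, labels)
fake_logits, _ = disc(fakes, labels)
d_loss = nn.functional.softplus(-real_logits).mean() + nn.functional.softplus(fake_logits).mean()
```

In this sketch the pseudo-labels play the role of the manually annotated class labels in a standard class-conditional GAN; how often to recluster and how to match old clusters to new ones are design choices the abstract does not specify.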