Paper Title

FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Paper Authors

Cemre Karakas, Alara Dirik, Eylul Yalcinkaya, Pinar Yanardag

Paper Abstract

Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyperrealistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model that can be used to generate a balanced set of images with respect to one (e.g., eyeglasses) or more attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. Our method does not require training additional models and directly debiases the GAN model, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at http://catlab-team.github.io/fairstyle.
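
The abstract only sketches the approach, i.e., shifting StyleGAN2 style-space channels so that generated images become balanced with respect to a target attribute, so the snippet below is a minimal illustrative sketch of that idea rather than the paper's actual procedure. The generate_with_shift and classify_attribute callables, the single-channel bisection search, and the monotonicity assumption are all hypothetical stand-ins not specified in the source.

# Illustrative sketch only: interfaces and the search strategy are assumptions,
# not the FairStyle algorithm as published.
import torch


@torch.no_grad()
def attribute_ratio(generate_with_shift, classify_attribute, shift, n_samples=256):
    """Fraction of generated images classified as having the target attribute.

    generate_with_shift(shift, n) is assumed to return n images with `shift`
    added to one chosen style-space channel; classify_attribute(images) is
    assumed to return a boolean tensor, one entry per image.
    """
    images = generate_with_shift(shift, n_samples)
    return classify_attribute(images).float().mean().item()


@torch.no_grad()
def find_balancing_shift(generate_with_shift, classify_attribute,
                         lo=-10.0, hi=10.0, target=0.5, iters=20):
    """Bisect over a scalar channel shift until the attribute ratio is ~target.

    Assumes the ratio grows monotonically with the shift on [lo, hi], which is
    plausible for a reasonably disentangled style channel but not guaranteed.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        ratio = attribute_ratio(generate_with_shift, classify_attribute, mid)
        if ratio < target:
            lo = mid  # too few positive samples: push the channel further
        else:
            hi = mid  # too many positive samples: pull it back
    return 0.5 * (lo + hi)

In practice one would wrap a pre-trained StyleGAN2 generator so that the scalar shift is applied to the selected style channel, plug in a pre-trained attribute classifier (e.g., for eyeglasses), and then generate with the returned shift to obtain an approximately balanced image set for that attribute.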
