Paper Title
StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions
Paper Authors
Paper Abstract
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data. Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data. StyLitGAN uses an intrinsic image method to decompose an image, followed by a search of the latent space of a pre-trained StyleGAN to identify a set of directions. By prompting the model to fix one component (e.g., albedo) and vary another (e.g., shading), we generate relighted images by adding the identified directions to the latent style codes. Quantitative metrics of change in albedo and lighting diversity allow us to choose effective directions using a forward selection process. Qualitative evaluation confirms the effectiveness of our method.