Paper Title
gDNA: Towards Generative Detailed Neural Avatars
Paper Authors
Abstract
To make 3D human avatars widely available, we must be able to generate a variety of 3D virtual humans with varied identities and shapes in arbitrary poses. This task is challenging due to the diversity of clothed body shapes, their complex articulations, and the resulting rich, yet stochastic geometric detail in clothing. Hence, current methods to represent 3D people do not provide a full generative model of people in clothing. In this paper, we propose a novel method that learns to generate detailed 3D shapes of people in a variety of garments with corresponding skinning weights. Specifically, we devise a multi-subject forward skinning module that is learned from only a few posed, un-rigged scans per subject. To capture the stochastic nature of high-frequency details in garments, we leverage an adversarial loss formulation that encourages the model to capture the underlying statistics. We provide empirical evidence that this leads to realistic generation of local details such as wrinkles. We show that our model is able to generate natural human avatars wearing diverse and detailed clothing. Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
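The abstract's "forward skinning module" builds on the standard forward linear blend skinning (LBS) formulation, in which each canonical-space point is deformed by a weighted blend of rigid bone transformations. The sketch below is a minimal generic LBS implementation for illustration only, not the paper's learned multi-subject module; the function name `forward_lbs` and its array layout are assumptions.

```python
import numpy as np

def forward_lbs(points, weights, bone_transforms):
    """Deform canonical-space points into posed space via linear blend skinning.

    points:          (N, 3) canonical-space points
    weights:         (N, B) per-point skinning weights (each row sums to 1)
    bone_transforms: (B, 4, 4) rigid per-bone transformation matrices

    Returns (N, 3) posed points. (Illustrative sketch, not the paper's module.)
    """
    # Lift points to homogeneous coordinates: (N, 4)
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Blend the bone transforms per point with the skinning weights: (N, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)
    # Apply each point's blended transform and drop the homogeneous coordinate
    posed = np.einsum("nij,nj->ni", blended, homo)
    return posed[:, :3]
```

In the paper's setting, the skinning weights themselves are predicted by the generative model alongside the detailed shape, rather than being fixed as in this sketch.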