Paper Title

Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation

Paper Authors

Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan

Paper Abstract

One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces. Specifically, the sub-optimally disentangled identity information of the driving subject inevitably interferes with the re-enactment result and leads to face shape distortion. To solve this problem, this paper proposes to use a 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement. Instead of using 3D coefficients alone for re-enactment control, we take advantage of the generative ability of the 3DMM to render textured face proxies. These proxies contain abundant yet compact geometric and semantic information about human faces, enabling us to compute the face motion field between source and driving images by estimating their dense correspondence. In this way, we can approximate the re-enactment result by warping the source image according to the motion field, and a Generative Adversarial Network (GAN) is adopted to further improve the visual quality of the warped result. Extensive experiments on various datasets demonstrate the advantages of the proposed method over existing state-of-the-art methods in both identity preservation and re-enactment fulfillment.
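To make the warping step in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how a source image can be warped by a dense motion field. It assumes the motion field is given as per-pixel (dx, dy) displacements in pixel units and uses PyTorch's grid sampling; the function name `warp_by_motion_field` is hypothetical.

```python
# Illustrative sketch only: warp a source image with a dense motion field.
# Assumes flow stores per-pixel (dx, dy) displacements in pixel units.
import torch
import torch.nn.functional as F


def warp_by_motion_field(source: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """source: (N, C, H, W) image; flow: (N, 2, H, W) displacement field."""
    n, _, h, w = source.shape
    # Base sampling grid holding the (x, y) coordinate of every pixel.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=source.dtype, device=source.device),
        torch.arange(w, dtype=source.dtype, device=source.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)  # (N, 2, H, W)
    # Displace every pixel by the estimated motion, then normalize coordinates
    # to [-1, 1] as required by grid_sample.
    coords = base + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(source, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


if __name__ == "__main__":
    src = torch.rand(1, 3, 256, 256)
    motion = torch.zeros(1, 2, 256, 256)  # zero motion -> identity warp
    coarse = warp_by_motion_field(src, motion)
    print(coarse.shape)  # torch.Size([1, 3, 256, 256])
```

In the paper's pipeline, such a warped image would only be a coarse approximation of the re-enactment result; a GAN-based generator is then used to refine its visual quality.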
