Paper Title

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

Authors

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Abstract

We present Face2Face, a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.
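The dense photometric consistency measure mentioned above can be illustrated with a minimal sketch: a sum of squared per-pixel color differences between a synthesized rendering of the face model and the input video frame, evaluated only on visible face pixels. The function name, the binary visibility mask, and the absence of landmark/regularization terms are simplifications for illustration, not the paper's full tracking energy.

```python
import numpy as np

def photometric_energy(rendered, frame, mask):
    """Sum of squared color residuals over the visible face region.

    rendered : (H, W, 3) float array, synthesized face rendering
    frame    : (H, W, 3) float array, input video frame
    mask     : (H, W) bool array, True where the face model is visible

    This is an illustrative stand-in for a dense photometric
    consistency term minimized during expression tracking.
    """
    residuals = (rendered - frame)[mask]   # per-pixel color differences
    return float(np.sum(residuals ** 2))

# Toy check: identical images yield zero energy.
render = np.zeros((4, 4, 3))
frame = np.zeros((4, 4, 3))
mask = np.ones((4, 4), dtype=bool)
print(photometric_energy(render, frame, mask))  # → 0.0
```

In practice such a term would be combined with sparse landmark constraints and statistical regularizers, and minimized over the model's pose, identity, expression, and illumination parameters.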
