Paper Title

Mesh Guided One-shot Face Reenactment using Graph Convolutional Networks

Authors

Guangming Yao, Yi Yuan, Tianjia Shao, Kun Zhou

Abstract

Face reenactment aims to animate a source face image to a different pose and expression provided by a driving image. Existing approaches are either designed for a specific identity, or suffer from the identity preservation problem in the one-shot or few-shot scenarios. In this paper, we introduce a method for one-shot face reenactment, which uses the reconstructed 3D meshes (i.e., the source mesh and driving mesh) as guidance to learn the optical flow needed for the reenacted face synthesis. Technically, we explicitly exclude the driving face's identity information in the reconstructed driving mesh. In this way, our network can focus on the motion estimation for the source face without the interference of driving face shape. We propose a motion net to learn the face motion, which is an asymmetric autoencoder. The encoder is a graph convolutional network (GCN) that learns a latent motion vector from the meshes, and the decoder serves to produce an optical flow image from the latent vector with CNNs. Compared to previous methods using sparse keypoints to guide the optical flow learning, our motion net learns the optical flow directly from 3D dense meshes, which provide the detailed shape and pose information for the optical flow, so it can achieve more accurate expression and pose on the reenacted face. Extensive experiments show that our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
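The abstract describes an asymmetric autoencoder ("motion net"): a graph convolutional network (GCN) encodes the source and driving meshes into a latent motion vector, and a CNN decoder maps that vector to an optical-flow image. The sketch below illustrates only the encoder side of this idea in NumPy, using the standard GCN propagation rule H' = ReLU(Â H W) with Â the symmetrically normalized adjacency. All dimensions, the per-vertex input encoding (concatenating source and driving vertex positions), the mean-pooling, and the dense "decoder stub" are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def normalize_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}: standard GCN propagation matrix
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_encoder(vertex_feats, A_norm, weights):
    # Stack of GCN layers H' = ReLU(A_norm @ H @ W), then mean-pool
    # over vertices to obtain a single latent motion vector.
    H = vertex_feats
    for W in weights:
        H = np.maximum(A_norm @ H @ W, 0.0)
    return H.mean(axis=0)

def decoder_stub(latent, out_hw=(8, 8)):
    # Placeholder for the CNN decoder: projects the latent vector to a
    # coarse 2-channel flow map (a real decoder would use upsampling CNNs).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((latent.shape[0], 2 * out_hw[0] * out_hw[1])) * 0.01
    return (latent @ W).reshape(out_hw[0], out_hw[1], 2)

# Toy mesh: 4 vertices on a square, so each vertex has two neighbors.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
src = np.random.default_rng(1).standard_normal((4, 3))  # source mesh vertices
drv = src + 0.1                                         # driving mesh (same shape identity)

A_norm = normalize_adjacency(A)
weights = [np.random.default_rng(2).standard_normal((6, 16)) * 0.1,
           np.random.default_rng(3).standard_normal((16, 32)) * 0.1]

# One plausible input encoding: concatenate source/driving positions per vertex.
z = gcn_encoder(np.concatenate([src, drv], axis=1), A_norm, weights)
flow = decoder_stub(z)
print(z.shape, flow.shape)  # (32,) (8, 8, 2)
```

Because the driving mesh is reconstructed without the driving face's identity (as the abstract states), the encoder input only carries the source identity plus the driving pose/expression, which is what lets the dense-mesh flow stay faithful to the source face shape.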
