Paper Title


Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer

Paper Authors

Xinhui Song, Tianyang Shi, Zunlei Feng, Mingli Song, Jackie Lin, Chuanjie Lin, Changjie Fan, Yi Yuan

Paper Abstract


Facial action unit (AU) intensity is an index that describes all visually discernible facial movements. Most existing methods learn an intensity estimator from limited AU data and therefore lack generalization ability beyond the dataset. In this paper, we present a framework to predict the facial parameters (including identity parameters and AU parameters) of a bone-driven face model (BDFM) under different views. The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor. The regressor fits the physically meaningful parameters of the BDFM from a single face image with the help of the generator, which maps the facial parameters to game-face images and thus acts as a differentiable renderer. In addition, an identity loss, a loopback loss, and an adversarial loss improve the regression results. Quantitative evaluations are performed on two public databases, BP4D and DISFA, and demonstrate that the proposed method achieves comparable or better performance than state-of-the-art methods. Moreover, qualitative results demonstrate the validity of our method in the wild.
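The training objective described in the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the actual feature extractor, regressor, and generator are deep networks trained end-to-end, and all dimensions, weights (`w_id`, `w_loop`, `w_adv`), and toy functions below are assumptions made for demonstration only.

```python
import numpy as np

FEAT_DIM, PARAM_DIM = 8, 8  # hypothetical sizes for illustration

def regressor(features):
    """Map extracted image features to BDFM facial parameters (toy linear stand-in)."""
    W = np.eye(FEAT_DIM, PARAM_DIM) * 0.5
    return features @ W

def generator(params):
    """Differentiable-renderer stand-in: map facial parameters to a 'game-face' image."""
    return np.tanh(params).repeat(4)  # toy 32-dim "rendered image"

def total_loss(real_img, features, w_id=1.0, w_loop=1.0, w_adv=0.1):
    """Combine the three losses named in the abstract (weights are assumptions)."""
    params = regressor(features)
    rendered = generator(params)
    # Identity loss: rendered face should stay consistent with the input face.
    identity_loss = np.mean((rendered - real_img) ** 2)
    # Loopback loss: regressing the rendered image should recover the same parameters.
    re_params = regressor(rendered[:FEAT_DIM])
    loopback_loss = np.mean((re_params - params) ** 2)
    # Adversarial loss stand-in: push rendered output toward the "real" class.
    adv_loss = -np.log(1.0 / (1.0 + np.exp(-rendered.mean())) + 1e-8)
    return w_id * identity_loss + w_loop * loopback_loss + w_adv * adv_loss

features = np.linspace(-1.0, 1.0, FEAT_DIM)  # pretend output of the feature extractor
real_img = np.zeros(FEAT_DIM * 4)            # pretend input face image
loss = total_loss(real_img, features)
```

In the actual framework the generator is pre-trained so that gradients of this combined loss can flow through the rendering step back to the regressor, which is what lets the regressor be trained without AU intensity labels.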
