Paper Title
Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices?
Paper Authors
Paper Abstract
This work digs into a root question in human perception: can face geometry be gleaned from one's voice? Previous works that study this question only adopt developments in image synthesis and convert voices into face images to show correlations, but working in the image domain unavoidably involves predicting attributes that voices cannot hint at, including facial textures, hairstyles, and backgrounds. We instead investigate the ability to reconstruct 3D faces in order to concentrate on geometry alone, which is much more physiologically grounded. We propose our analysis framework, Cross-Modal Perceptionist, under both supervised and unsupervised learning. First, we construct a dataset, Voxceleb-3D, which extends Voxceleb and includes paired voices and face meshes, making supervised learning possible. Second, we use a knowledge distillation mechanism to study whether face geometry can still be gleaned from voices without paired voice and 3D face data, given the limited availability of 3D face scans. We break down the core question into four parts and perform visual and numerical analyses as responses to it. Our findings echo those in physiology and neuroscience about the correlation between voices and facial structures. This work provides an explainable foundation for future human-centric cross-modal learning. See our project page: https://choyingw.github.io/works/Voice2Mesh/index.html
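The knowledge distillation setup mentioned above can be illustrated with a minimal sketch: a frozen image-based teacher predicts face-geometry parameters, and a voice-based student is trained to match those predictions with an L1 loss, so no paired voice/3D-scan data is needed. All dimensions, the linear student, and the random tensors below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
VOICE_DIM, PARAM_DIM = 512, 62  # illustrative sizes, not the paper's

# Student: a single linear map from a voice embedding to 3DMM-style
# geometry parameters (a stand-in for a learned voice encoder).
W = rng.standard_normal((PARAM_DIM, VOICE_DIM)) * 0.01

def student(voice_emb):
    """Predict geometry parameters from a voice embedding."""
    return W @ voice_emb

def distillation_loss(student_params, teacher_params):
    """L1 distance to the frozen image-based teacher's prediction."""
    return np.mean(np.abs(student_params - teacher_params))

# One toy distillation step with random tensors standing in for real data.
voice = rng.standard_normal(VOICE_DIM)         # a voice embedding
teacher_pred = rng.standard_normal(PARAM_DIM)  # teacher's mesh parameters
pred = student(voice)
loss = distillation_loss(pred, teacher_pred)

# Manual gradient of the L1 loss w.r.t. W, followed by a gradient step.
grad = np.sign(pred - teacher_pred)[:, None] * voice[None, :] / PARAM_DIM
W -= 1e-3 * grad
```

In this sketch the teacher is just a random target; in practice it would be a pretrained image-to-3D-face network whose outputs supervise the student.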