Paper Title

LatentGaze: Cross-Domain Gaze Estimation through Gaze-Aware Analytic Latent Code Manipulation

Paper Authors

Isack Lee, Jun-Seok Yun, Hee Hyeon Kim, Youngju Na, Seok Bong Yoo

Paper Abstract

Although recent gaze estimation methods lay great emphasis on attentively extracting gaze-relevant features from facial or eye images, how to define features that include gaze-relevant components has remained ambiguous. This obscurity makes the model learn not only gaze-relevant features but also irrelevant ones, which is particularly fatal for cross-dataset performance. To overcome this challenging issue, we propose a gaze-aware analytic manipulation method, based on a data-driven approach with the disentanglement characteristics of generative adversarial network (GAN) inversion, to selectively utilize gaze-relevant features in a latent code. Furthermore, by utilizing the GAN-based encoder-generator process, we shift the input image from the target domain to the source domain, of which the gaze estimator is sufficiently aware. In addition, we propose a gaze distortion loss in the encoder that prevents the distortion of gaze information. The experimental results demonstrate that our method achieves state-of-the-art accuracy in cross-domain gaze estimation tasks. The code is available at https://github.com/leeisack/LatentGaze/.
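The following is a minimal, self-contained sketch of the idea described in the abstract, not the authors' implementation: an inversion encoder maps a face image to a latent code, a "gaze-aware" mask selects latent channels assumed to carry gaze information, the remaining channels are replaced with source-domain statistics so the generator re-synthesizes a source-like image, and a gaze distortion loss keeps the estimated gaze consistent across the encoder-generator round trip. All module and variable names (`SimpleEncoder`, `SimpleGenerator`, `GazeEstimator`, `gaze_mask`, `source_mean`) are hypothetical placeholders.

```python
# Hedged sketch of gaze-aware latent code manipulation; NOT the LatentGaze code.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512  # typical StyleGAN-style latent width; an assumption here

class SimpleEncoder(nn.Module):
    """Stand-in for a GAN-inversion encoder (image -> latent code)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class SimpleGenerator(nn.Module):
    """Stand-in for a pretrained GAN generator (latent code -> image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, 3 * 64 * 64)
    def forward(self, w):
        return self.net(w).view(-1, 3, 64, 64)

class GazeEstimator(nn.Module):
    """Stand-in gaze regressor predicting (pitch, yaw) from an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
    def forward(self, x):
        return self.net(x)

def shift_to_source_domain(w, gaze_mask, source_mean):
    """Keep gaze-relevant latent channels; replace the rest with source-domain stats."""
    return w * gaze_mask + source_mean * (1.0 - gaze_mask)

encoder, generator, estimator = SimpleEncoder(), SimpleGenerator(), GazeEstimator()

# Hypothetical gaze-aware mask: 1 for channels assumed gaze-relevant, 0 otherwise.
gaze_mask = torch.zeros(LATENT_DIM)
gaze_mask[:64] = 1.0
# Mean latent code of the source (training) domain, assumed precomputed.
source_mean = torch.zeros(LATENT_DIM)

target_img = torch.rand(4, 3, 64, 64)           # unlabeled target-domain batch
w = encoder(target_img)                          # GAN inversion: image -> latent code
w_shifted = shift_to_source_domain(w, gaze_mask, source_mean)
source_like_img = generator(w_shifted)           # re-synthesized, source-domain-like image
gaze = estimator(source_like_img)                # estimator now sees a familiar domain

# Gaze distortion loss (sketch): the encoder-generator round trip should not
# change the gaze predicted on the original image.
gaze_distortion_loss = F.l1_loss(
    estimator(generator(encoder(target_img))),
    estimator(target_img).detach(),
)
print(gaze.shape, gaze_distortion_loss.item())
```

In this sketch the mask is fixed by hand; in practice the gaze-relevant latent channels would have to be identified analytically from data, and the encoder, generator, and estimator would be the pretrained networks released in the repository above.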
