Paper Title


FairFaceGAN: Fairness-aware Facial Image-to-Image Translation

Authors

Sunhee Hwang, Sungho Park, Dohyung Kim, Mirae Do, Hyeran Byun

Abstract


In this paper, we introduce FairFaceGAN, a fairness-aware facial Image-to-Image translation model that mitigates the problem of unwanted translation of protected attributes (e.g., gender, age, race) during facial attribute editing. Unlike existing models, FairFaceGAN learns fair representations with two separate latents: one related to the target attributes to translate, and the other unrelated to them. This strategy enables FairFaceGAN to separate the information about protected attributes from that about target attributes. It also prevents unwanted translation of protected attributes while editing target attributes. To evaluate the degree of fairness, we perform two types of experiments on the CelebA dataset. First, we compare fairness-aware classification performance when augmenting data with existing image translation methods and with FairFaceGAN, respectively. Moreover, we propose a new fairness metric, namely the Fréchet Protected Attribute Distance (FPAD), which measures how well protected attributes are preserved. Experimental results demonstrate that FairFaceGAN shows consistent improvements in fairness over existing image translation models. Further, we also evaluate image translation performance, where FairFaceGAN shows competitive results compared to existing methods.
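The abstract describes FPAD only at a high level. Assuming it follows the standard Fréchet distance between two Gaussian-modeled feature distributions (the same form used by FID), but computed on features extracted for the protected attributes of real versus translated images, the core computation might be sketched as follows. The function name and the eigenvalue-based trace trick are illustrative choices, not details from the paper:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between two sets of feature vectors,
    each modeled as a multivariate Gaussian (as in FID)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    # Tr((cov_a @ cov_b)^(1/2)) via the eigenvalues of the product;
    # they are real and non-negative for PSD covariances, so clip
    # tiny negative real parts caused by numerical error.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    trace_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b)
                 - 2.0 * trace_sqrt)
```

Under this reading, a lower FPAD between the protected-attribute features of input images and their translations would indicate that protected attributes are better preserved by the editing model.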
