Title

Are Commercial Face Detection Models as Biased as Academic Models?

Authors

Samuel Dooley, George Z. Wei, Tom Goldstein, John P. Dickerson

Abstract

As facial recognition systems are deployed more widely, scholars and activists have studied their biases and harms. Audits are commonly used to accomplish this and compare the algorithmic facial recognition systems' performance against datasets with various metadata labels about the subjects of the images. Seminal works have found discrepancies in performance by gender expression, age, perceived race, skin type, etc. These studies and audits often examine algorithms which fall into two categories: academic models or commercial models. We present a detailed comparison between academic and commercial face detection systems, specifically examining robustness to noise. We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness, specifically by having statistically significant decreased performance on older individuals and those who present their gender in a masculine manner. When we compare the size of these disparities to that of commercial models, we conclude that commercial models - in contrast to their relatively larger development budget and industry-level fairness commitments - are always as biased or more biased than an academic model.
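The audit procedure the abstract describes, comparing each demographic group's detection rate on clean versus noise-corrupted images, can be sketched as below. This is a minimal illustration, not the authors' actual pipeline: the detector, group labels, and noise function are hypothetical stand-ins.

```python
def detection_rate(results):
    """Fraction of images on which the detector found a face."""
    return sum(results) / len(results)

def audit_noise_disparity(detect, images_by_group, add_noise):
    """For each demographic group, measure the detection rate on clean
    and noise-corrupted copies of its images and report the drop."""
    report = {}
    for group, images in images_by_group.items():
        clean = detection_rate([detect(img) for img in images])
        noisy = detection_rate([detect(add_noise(img)) for img in images])
        report[group] = {"clean": clean, "noisy": noisy, "drop": clean - noisy}
    return report

# Toy run with a stub detector that, by construction, fails only on
# noisy images of the "older" group, producing a demographic disparity:
images_by_group = {"older": ["older_img"] * 4, "younger": ["younger_img"] * 4}
add_noise = lambda img: "noisy_" + img
detect = lambda img: not ("noisy" in img and "older" in img)

report = audit_noise_disparity(detect, images_by_group, add_noise)
# report["older"]["drop"] exceeds report["younger"]["drop"]
```

In an actual audit, `detect` would wrap a commercial API or an academic model, `add_noise` would apply a corruption such as Gaussian noise, and the per-group drops would be tested for statistical significance rather than compared by eye.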
