Paper Title

Trucks Don't Mean Trump: Diagnosing Human Error in Image Analysis

Authors

Zamfirescu-Pereira, J. D., Chen, Jerry, Wen, Emily, Koenecke, Allison, Garg, Nikhil, Pierson, Emma

Abstract

Algorithms provide powerful tools for detecting and dissecting human bias and error. Here, we develop machine learning methods to analyze how humans err in a particular high-stakes task: image interpretation. We leverage a unique dataset of 16,135,392 human predictions of whether a neighborhood voted for Donald Trump or Joe Biden in the 2020 US election, based on a Google Street View image. We show that by training a machine learning estimator of the Bayes optimal decision for each image, we can provide an actionable decomposition of human error into bias, variance, and noise terms, and further identify specific features (like pickup trucks) which lead humans astray. Our methods can be applied to ensure that human-in-the-loop decision-making is accurate and fair and are also applicable to black-box algorithmic systems.
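
The core technique described in the abstract is a bias/variance/noise decomposition of human predictions relative to an ML estimate of the Bayes-optimal decision for each image. Below is a minimal illustrative sketch of what such a decomposition can look like, assuming per-image binary predictions from multiple human raters and an ML estimate of P(Trump | image); all variable names and the exact form of the decomposition are assumptions for illustration, not taken from the paper's code.

```python
import numpy as np

# Hypothetical inputs (illustrative names, not from the paper):
#   y:      (n,) true labels in {0, 1} (did the neighborhood vote for Trump?)
#   p_star: (n,) ML estimate of the Bayes-optimal probability P(y=1 | image)
#   H:      (n, k) binary predictions from k human raters per image

def decompose_human_error(y, p_star, H):
    """Sketch of a bias/variance/noise split of human squared error,
    in the spirit of the paper's decomposition (details are assumed)."""
    h_bar = H.mean(axis=1)                    # mean human prediction per image
    noise = np.mean(p_star * (1 - p_star))    # irreducible: even Bayes errs
    bias = np.mean((h_bar - p_star) ** 2)     # systematic deviation from Bayes
    variance = np.mean(H.var(axis=1))         # rater disagreement per image
    total = np.mean((H - y[:, None]) ** 2)    # overall human squared error
    # Under standard independence assumptions, total ≈ noise + bias + variance.
    return {"noise": noise, "bias": bias, "variance": variance, "total": total}
```

A high bias term would indicate humans systematically deviating from the Bayes-optimal decision (e.g., over-weighting pickup trucks), while a high variance term reflects disagreement among raters on the same image; the paper's actual estimator and decomposition may differ in detail.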
