Paper Title
Extreme Image Transformations Affect Humans and Machines Differently
Paper Authors
Abstract
Some recent artificial neural networks (ANNs) claim to model aspects of primate neural and human performance data. Their success in object recognition, however, depends on exploiting low-level features to solve visual tasks in ways that humans do not. As a result, out-of-distribution or adversarial inputs are often challenging for ANNs. Humans, by contrast, learn abstract patterns and are largely unaffected by many extreme image distortions. We introduce a set of novel image transforms inspired by neurophysiological findings and evaluate humans and ANNs on an object recognition task. We show that machines perform better than humans on certain transforms and struggle to perform on par with humans on others that humans find easy. We quantify the differences in accuracy between humans and machines and derive a difficulty ranking of our transforms from the human data. We also suggest how certain characteristics of human visual processing could be adapted to improve ANN performance on the transforms that machines find difficult.
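For readers who want to probe this kind of effect themselves, the sketch below shows one way to measure a pretrained ANN's accuracy under an extreme image transformation. It is only an illustrative assumption, not the paper's protocol: the patch-shuffle transform, the ResNet-50 model, the grid size, and the dataset path are all placeholders, since the abstract does not specify the actual transforms or models used.

# A hypothetical sketch (not the paper's actual protocol): apply one illustrative
# "extreme" transform -- a random patch shuffle -- and measure a pretrained
# ResNet-50's top-1 accuracy on an ImageNet-style validation folder.
import torch
import torchvision
from torchvision import transforms

def shuffle_patches(img: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Split a CxHxW image into grid x grid patches and randomly permute them."""
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    patches = (img[:, :ph * grid, :pw * grid]
               .unfold(1, ph, ph).unfold(2, pw, pw)      # c x grid x grid x ph x pw
               .reshape(c, grid * grid, ph, pw))
    patches = patches[:, torch.randperm(grid * grid)]    # shuffle patch order
    rows = [torch.cat(list(patches[:, r * grid:(r + 1) * grid].unbind(1)), dim=2)
            for r in range(grid)]
    return torch.cat(rows, dim=1)                        # reassemble shuffled image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: shuffle_patches(x, grid=4)),  # the "extreme" transform
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
dataset = torchvision.datasets.ImageFolder("path/to/imagenet/val",  # assumed folder layout
                                           transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"Top-1 accuracy under patch shuffle: {correct / total:.3f}")

Running the same loop with the Lambda step removed gives a clean-image baseline; the gap between the two accuracies is the kind of machine-side degradation that the paper compares against human performance on its own transforms.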