Paper Title
Speciesist bias in AI -- How AI applications perpetuate discrimination and unfair outcomes against animals
Paper Authors
Paper Abstract
Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is the first to describe the 'speciesist bias' and investigate it in several different AI systems. Speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. These patterns can be found in image recognition systems, large language models, and recommender systems. Therefore, AI technologies currently play a significant role in perpetuating and normalizing violence against animals. This can only be changed when AI fairness frameworks widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence that is inflicted on animals, and especially on farmed animals.