Paper Title
Fairness and Bias in Robot Learning
Paper Authors
Paper Abstract
Machine learning has significantly enhanced the abilities of robots, enabling them to perform a wide range of tasks in human environments and adapt to our uncertain real world. Recent works in various machine learning domains have highlighted the importance of accounting for fairness to ensure that these algorithms do not reproduce human biases and consequently lead to discriminatory outcomes. With robot learning systems performing an increasing number of tasks in our everyday lives, it is crucial to understand the influence of such biases to prevent unintended behavior toward certain groups of people. In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges. We propose a taxonomy of bias sources and the resulting types of discrimination. Using examples from different robot learning domains, we examine scenarios of unfair outcomes and strategies to mitigate them. We present early advances in the field by covering different fairness definitions, ethical and legal considerations, and methods for fair robot learning. With this work, we aim to pave the way for groundbreaking developments in fair robot learning.