Paper Title


Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study

Authors

Kinshuk Sengupta, Praveen Ranjan Srivastava

Abstract


Language data and models exhibit various types of bias, be it ethnic, religious, gender-based, or socioeconomic. When trained on racially biased datasets, AI/NLP models suffer from poor explainability, degrade the user experience during decision making, and thus further magnify societal biases, raising profound ethical implications for society. The motivation of the study is to investigate how AI systems absorb bias from data, produce unexplainable discriminatory outcomes, and influence an individual's interpretation of system output when racially biased features are present in the dataset. The experiment is designed to study the counterfactual impact of racial bias features present in language datasets and their associated effect on model outcomes. A mixed research methodology is adopted to investigate, through controlled lab experimentation, the combined implications of biased model outcomes for user experience and decision making. The findings provide foundational support for correlating biased concepts present in a dataset with the behavior of an artificial intelligence model solving an NLP task. Further, the results demonstrate a negative influence on users' persuasiveness, which alters an individual's decision-making when relying on model outcomes to act. The paper addresses the harm to customer trustworthiness caused by inequitable system design and provides strong support for researchers, policymakers, and data scientists building responsible AI frameworks within organizations.
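The counterfactual design mentioned in the abstract can be illustrated with a minimal sketch: hold an input fixed, swap only a race-associated feature (here, a name acting as a proxy), and measure the change in the model's output. Everything below is a hypothetical stand-in for illustration, not the paper's actual model, lexicon, or data.

```python
def toy_score(text: str) -> float:
    """Hypothetical sentiment-like scorer with a deliberately biased lexicon
    (the name weights simulate a racial-bias feature absorbed from data)."""
    weights = {"great": 1.0, "reliable": 0.5, "Emily": 0.2, "Lakisha": -0.2}
    return sum(w for token, w in weights.items() if token in text.split())

def counterfactual_gap(template: str, name_a: str, name_b: str) -> float:
    """Score difference when only the name is swapped in otherwise identical
    inputs; a nonzero gap signals a counterfactual bias effect."""
    return toy_score(template.format(name=name_a)) - toy_score(template.format(name=name_b))

gap = counterfactual_gap("{name} is a great and reliable candidate", "Emily", "Lakisha")
print(round(gap, 2))  # 0.4 : identical inputs scored differently by name alone
```

An unbiased model would yield a gap of zero for every such counterfactual pair; the paper's experiments study how nonzero gaps of this kind propagate into user persuasiveness and decision making.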
