Paper Title

Geographic and Geopolitical Biases of Language Models

Paper Authors

Fahim Faisal, Antonios Anastasopoulos

Abstract

Pretrained language models (PLMs) often fail to fairly represent target users from certain world regions because of the under-representation of those regions in training datasets. With recent PLMs trained on enormous data sources, quantifying their potential biases is difficult, due to their black-box nature and the sheer scale of the data sources. In this work, we devise an approach to study the geographic bias (and knowledge) present in PLMs, proposing a Geographic-Representation Probing Framework adopting a self-conditioning method coupled with entity-country mappings. Our findings suggest PLMs' representations map surprisingly well to the physical world in terms of country-to-country associations, but this knowledge is unequally shared across languages. Last, we explain how large PLMs, despite exhibiting notions of geographical proximity, over-amplify geopolitical favouritism at inference time.
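To make the idea of "country-to-country associations" concrete, here is a minimal sketch of how one might compare model-derived country representations pairwise. This is an illustration only, not the paper's actual probing framework: the vectors below are hypothetical toy values, whereas in practice they would be extracted from a PLM's hidden states via the entity-country mappings the abstract describes.

```python
import numpy as np

# Hypothetical stand-ins for PLM-derived country representations.
# Real probing would obtain these from a pretrained model's embeddings.
country_vecs = {
    "France":  np.array([0.9, 0.1, 0.2]),
    "Germany": np.array([0.8, 0.2, 0.1]),
    "Japan":   np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def country_associations(vecs):
    """Pairwise cosine similarities; higher = stronger model association."""
    names = sorted(vecs)
    return {(a, b): cosine(vecs[a], vecs[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

assoc = country_associations(country_vecs)
# If the representations encode geographic proximity, geographically
# close pairs should score higher than distant ones.
print(assoc[("France", "Germany")] > assoc[("France", "Japan")])  # True
```

One could then correlate such association scores with physical distances between countries to test how well the model's geometry mirrors the real world, which is the kind of comparison the abstract alludes to.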
