Paper Title

Online Platforms and the Fair Exposure Problem Under Homophily

Paper Authors

Jakob Schoeffer, Alexander Ritchie, Keziah Naggita, Faidra Monachou, Jessie Finocchiaro, Marc Juarez

Paper Abstract

In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) among two groups of users through constraints similar to those imposed by the Fairness Doctrine in the United States in the past. Groups are characterized by different affiliations (e.g., political views) and have different preferences for content. We develop a stylized framework that models intra- and intergroup content propagation under homophily, and we formulate the platform's decision as an optimization problem that aims at maximizing user engagement, potentially under fairness constraints. Our main notion of fairness requires that each group see a mixture of their preferred and non-preferred content, encouraging information diversity. Promoting such information diversity is often viewed as desirable and a potential means for breaking out of harmful echo chambers. We study the solutions to both the fairness-agnostic and fairness-aware problems. We prove that a fairness-agnostic approach inevitably leads to group-homogeneous targeting by the platform. This is only partially mitigated by imposing fairness constraints: we show that there exist optimal fairness-aware solutions which target one group with different types of content and the other group with only one type that is not necessarily the group's most preferred. Finally, using simulations with real-world data, we study the system dynamics and quantify the price of fairness.
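To make the setup concrete, here is a minimal, hypothetical sketch, not the paper's actual model or notation: a toy engagement-maximization problem over two groups and two content types, solved once without and once with a minimum-exposure ("fairness") constraint. The engagement values, the threshold alpha, and the linear-programming formulation are all illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the paper's actual model): engagement
# maximization over two groups and two content types, with and without a
# minimum-exposure constraint. All numbers and names are illustrative.
import numpy as np
from scipy.optimize import linprog

# Assumed expected engagement e[g, c] of group g with content type c.
e = np.array([[0.9, 0.2],   # group 0 strongly prefers content type 0
              [0.3, 0.8]])  # group 1 strongly prefers content type 1

alpha = 0.25  # assumed minimum share of each content type every group must see

# Decision vector x = [x00, x01, x10, x11]: share of group g's feed given to
# content type c. Maximize sum(e * x), i.e., minimize -e . x.
c = -e.flatten()

# Each group's shares sum to 1.
A_eq = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
b_eq = np.array([1.0, 1.0])

# Fairness-agnostic run (lower bound 0) vs. fairness-aware run (lower bound alpha).
for lo in (0.0, alpha):
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(lo, 1.0)] * 4)
    print(f"lower bound {lo}: allocation {res.x.round(2)}, engagement {-res.fun:.2f}")
```

In this toy instance, the fairness-agnostic optimum serves each group only its preferred content type (an echo-chamber-like allocation), while the constrained optimum mixes content for both groups; the engagement gap between the two runs is a simple stand-in for the price of fairness discussed in the abstract.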
