Paper Title
An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes
Paper Authors
Paper Abstract
The negative effects of misinformation filter bubbles in adaptive systems have been known to researchers for some time. Several studies have investigated, most prominently on YouTube, how fast a user can get into a misinformation filter bubble simply by selecting wrong choices from the items offered. Yet, no studies so far have investigated what it takes to burst the bubble, i.e., revert the bubble enclosure. We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content (for various topics). Then, by watching misinformation-debunking content, the agents try to burst the bubbles and reach more balanced recommendation mixes. We recorded the search results and recommendations that the agents encountered and analyzed them for the presence of misinformation. Our key finding is that bursting a filter bubble is possible, although it manifests differently from topic to topic. Moreover, we observe that in some situations filter bubbles do not truly appear at all. We also draw a direct comparison with a previous study. Sadly, we did not find much improvement in misinformation occurrence, despite YouTube's recent pledges.
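To make the two-phase audit procedure described in the abstract more concrete, the following is a minimal sketch (not the authors' implementation) of such an agent loop: the agent first watches misinformation-promoting videos to build a filter bubble, then watches debunking videos to try to burst it, logging the recommendations seen after each watch. The platform interaction is stubbed out, and all video IDs and helper names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class WatchLog:
    phase: str                 # "promoting" or "debunking"
    video_id: str              # video the agent watched
    recommendations: List[str] = field(default_factory=list)


def watch_and_collect(video_id: str) -> List[str]:
    """Placeholder for driving a real browser session (e.g., via a
    browser-automation tool): play the video, then scrape the list of
    recommended videos shown alongside it."""
    return [f"rec-for-{video_id}-{i}" for i in range(5)]


def run_audit(promoting: List[str], debunking: List[str]) -> List[WatchLog]:
    """Run one agent through the bubble-building and bubble-bursting phases,
    recording the recommendations encountered after each watched video."""
    logs: List[WatchLog] = []
    # Phase 1: build the misinformation filter bubble.
    for vid in promoting:
        logs.append(WatchLog("promoting", vid, watch_and_collect(vid)))
    # Phase 2: attempt to burst the bubble with debunking content.
    for vid in debunking:
        logs.append(WatchLog("debunking", vid, watch_and_collect(vid)))
    return logs


if __name__ == "__main__":
    # Illustrative seed lists for one topic; a real audit would use
    # manually curated promoting/debunking videos per topic.
    audit_log = run_audit(["promo-1", "promo-2"], ["debunk-1", "debunk-2"])
    for entry in audit_log:
        print(entry.phase, entry.video_id, len(entry.recommendations), "recs")
```

The collected logs would then be annotated for the presence of misinformation, allowing a comparison of the recommendation mix before and after the bursting phase.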