Paper Title
Viable Threat on News Reading: Generating Biased News Using Natural Language Models
Paper Authors
Abstract
Recent advancements in natural language generation have raised serious concerns. High-performance language models are widely used for language generation tasks because they are able to produce fluent and meaningful sentences. These models are already being used to create fake news. They can also be exploited to generate biased news, which can then be used to attack news aggregators, change readers' behavior, and influence their bias. In this paper, we use a threat model to demonstrate that publicly available language models can reliably generate biased news content conditioned on an original input news article. We also show that a large number of high-quality biased news articles can be generated using controllable text generation. A subjective evaluation with 80 participants demonstrated that the generated biased news is generally fluent, and a bias evaluation with 24 participants demonstrated that the bias (left or right) is usually evident in the generated articles and can be easily identified.