Paper Title
A Case Report On The "A.I. Locked-In Problem": social concerns with modern NLP
Paper Authors
Paper Abstract
Modern NLP models are becoming better conversational agents than their predecessors. Recurrent Neural Networks (RNNs), and especially Long Short-Term Memory (LSTM) architectures, allow an agent to better store and use information about semantic content, a trend that has become even more pronounced with Transformer models. Large Language Models (LLMs) such as OpenAI's GPT-3 are known to be able to construct and follow a narrative, which enables the system to adopt personas on the fly, adapt them, and play along in conversational stories. However, practical experimentation with GPT-3 shows a recurring problem with these modern NLP systems: they can "get stuck" in a narrative, so that further conversation, prompt execution, or commands become futile. This is referred to here as the "Locked-In Problem" and is exemplified with an experimental case report, followed by a discussion of the practical and social concerns that accompany this problem.