Paper Title
Talking About Large Language Models
Paper Authors
Paper Abstract
Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.