Paper Title
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Paper Authors
Paper Abstract
Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allow it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on a long-context multi-hop QA task, we can more effectively teach the sub-tasks via separate sub-task prompts; and on open-domain multi-hop QA, we can incorporate symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, code, and prompts are available at https://github.com/allenai/DecomP.
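To make the modular structure described in the abstract concrete, below is a minimal, self-contained Python sketch of the decompose-and-delegate control flow on a toy letter-concatenation task. All names here (`decomposer`, `HANDLERS`, `run`) are illustrative assumptions, not the authors' actual API from the DecomP repository; the sub-task handlers are stubbed as plain Python functions standing in for prompt-based LLMs, trained models, or symbolic functions.

```python
# Illustrative sketch of Decomposed Prompting's control flow (assumed names,
# not the authors' API). A decomposer produces a plan of sub-task calls, each
# delegated to a handler from a library. Any handler can be swapped for a
# better prompt, a trained model, a symbolic function, or a further
# decomposition without changing the rest of the pipeline.

from typing import Callable, Dict, List


def split_words(text: str) -> List[str]:
    """Sub-task: split a phrase into words (could itself be an LLM prompt)."""
    return text.split()


def first_letters(words: List[str]) -> List[str]:
    """Sub-task: take the first letter of each word (a symbolic function here)."""
    return [w[0] for w in words]


def concatenate(letters: List[str]) -> str:
    """Sub-task: join letters into one string (a symbolic function here)."""
    return "".join(letters)


# Library of sub-task handlers, keyed by sub-task name.
HANDLERS: Dict[str, Callable] = {
    "split": split_words,
    "first_letters": first_letters,
    "concat": concatenate,
}


def decomposer(question: str) -> List[str]:
    """Stand-in for the few-shot decomposer prompt: it would normally be an
    LLM prompted to emit this sequence of sub-task calls for the question."""
    return ["split", "first_letters", "concat"]


def run(question: str, payload: str) -> str:
    """Execute the decomposition plan, threading each sub-task's output forward."""
    state = payload
    for step in decomposer(question):
        state = HANDLERS[step](state)
    return state


if __name__ == "__main__":
    # Toy example: concatenate the first letter of every word.
    print(run("Concatenate the first letter of every word", "large language models"))
    # -> "llm"
```

In this sketch the recursive and replaceable aspects mentioned in the abstract correspond to swapping an entry of `HANDLERS` for another `run`-style pipeline (further decomposition) or for a retriever or other symbolic component.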