Paper Title
PaLI: A Jointly-Scaled Multilingual Language-Image Model
Paper Authors
Paper Abstract
Effective scaling and a flexible task interface enable large language models to excel at many tasks. We present PaLI (Pathways Language and Image model), a model that extends this approach to the joint modeling of language and vision. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pre-trained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and leverage the substantial cost of training them. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train a large, 4-billion parameter ViT (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks, based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art in multiple vision and language tasks (such as captioning, visual question-answering, scene-text understanding), while retaining a simple, modular, and scalable design.