Paper Title

Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering

Paper Authors

AJ Piergiovanni, Wei Li, Weicheng Kuo, Mohammad Saffar, Fred Bertsch, Anelia Angelova

Paper Abstract

We present Answer-Me, a task-aware multi-task framework that unifies a variety of question answering tasks, such as visual question answering, visual entailment, and visual reasoning. In contrast to previous works that use contrastive or generative captioning training, we propose a novel and simple recipe to pre-train a vision-language joint model, which is multi-task as well. The pre-training uses only noisy image captioning data and is formulated to use the entire architecture end-to-end, with both a strong language encoder and decoder. Our results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results across a variety of question answering tasks. Our multi-task mixture training learns from tasks of various question intents and thus generalizes better, including on zero-shot vision-language tasks. We conduct experiments in the challenging multi-task and open-vocabulary settings and across a variety of datasets and tasks, such as VQA2.0, SNLI-VE, NLVR2, and GQA. We observe that the proposed approach is able to generalize to unseen tasks and that more diverse mixtures lead to higher accuracy in both known and novel tasks.
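To make the recipe concrete, the sketch below illustrates the two ingredients the abstract describes: casting every task into a unified image-plus-text-prompt to free-form-answer format, and sampling training batches from a weighted mixture of tasks. This is a minimal, hypothetical Python sketch, not the paper's implementation; the task names, the "[task]" prompt-prefix format, and the mixture weights are assumptions made for illustration.

```python
import random

# Illustrative task mixture with sampling weights. The task names and
# weights here are assumptions for this sketch, not the paper's setup.
MIXTURE_WEIGHTS = {
    "vqa": 0.4,         # visual question answering (e.g., VQA2.0)
    "entailment": 0.3,  # visual entailment (e.g., SNLI-VE)
    "reasoning": 0.3,   # visual reasoning (e.g., NLVR2, GQA)
}

def format_example(task, image, text, answer):
    """Cast any task into a unified (image, prompt) -> answer-text form.

    A task-aware prefix in the prompt signals the question intent, so a
    single encoder-decoder can serve all tasks and emit open-vocabulary
    answers as free-form text. The prefix format is a hypothetical
    choice for this sketch.
    """
    return {"image": image, "prompt": f"[{task}] {text}", "target": answer}

def sample_mixture_batch(datasets, batch_size=8, rng=random):
    """Draw one training batch with examples sampled across tasks by weight."""
    tasks = list(MIXTURE_WEIGHTS)
    weights = [MIXTURE_WEIGHTS[t] for t in tasks]
    batch = []
    for _ in range(batch_size):
        task = rng.choices(tasks, weights=weights, k=1)[0]
        image, text, answer = rng.choice(datasets[task])
        batch.append(format_example(task, image, text, answer))
    return batch

# Tiny demo with placeholder data.
if __name__ == "__main__":
    datasets = {
        "vqa": [("img_001.jpg", "What color is the bus?", "red")],
        "entailment": [("img_002.jpg", "A dog runs on grass.", "entailment")],
        "reasoning": [("img_003.jpg", "Both images contain two cats.", "false")],
    }
    for example in sample_mixture_batch(datasets, batch_size=4):
        print(example["prompt"], "->", example["target"])
```

In this framing, adding a new task only means adding another entry to the mixture, which is consistent with the abstract's observation that a single model trained on diverse question intents generalizes better, including zero-shot to unseen tasks.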
