Paper Title

Curriculum Learning for Data-Efficient Vision-Language Alignment

Paper Authors

Tejas Srinivasan, Xiang Ren, Jesse Thomason

Paper Abstract


Aligning image and text encoders from scratch using contrastive learning requires large amounts of paired image-text data. We alleviate this need by aligning individually pre-trained language and vision representation models using a much smaller amount of paired data, augmented with a curriculum learning algorithm to learn fine-grained vision-language alignments. TOnICS (Training with Ontology-Informed Contrastive Sampling) initially samples minibatches whose image-text pairs contain a wide variety of objects to learn object-level alignment, and progressively samples minibatches where all image-text pairs contain the same object to learn finer-grained contextual alignment. Aligning pre-trained BERT and VinVL models to each other using TOnICS outperforms CLIP on downstream zero-shot image retrieval while using less than 1% as much training data.
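The curriculum described above — starting with minibatches that mix many different objects and progressively shifting toward minibatches where every image-text pair shares the same object — can be sketched as a sampler. This is a hypothetical illustration under assumed data structures (pairs tagged with a single object label; a `same_object_prob` curriculum knob), not the authors' released implementation:

```python
import random
from collections import defaultdict

def sample_minibatch(pairs, batch_size, same_object_prob, rng=random):
    """Sketch of an ontology-informed curriculum sampler (hypothetical).

    `pairs` is a list of (image_id, caption, object_label) triples.
    Early in training `same_object_prob` is near 0, so minibatches mix
    many distinct objects (object-level alignment). Raising it toward 1
    yields minibatches whose pairs all contain the same object, forcing
    the model to learn finer-grained contextual alignment.
    """
    by_object = defaultdict(list)
    for p in pairs:
        by_object[p[2]].append(p)

    if rng.random() < same_object_prob:
        # Hard minibatch: every pair mentions the same object, so
        # contrastive negatives differ only in context.
        candidates = [v for v in by_object.values() if len(v) >= batch_size]
        if candidates:
            return rng.sample(rng.choice(candidates), batch_size)

    # Easy minibatch: draw each pair from a different object, so
    # negatives can be distinguished at the object level.
    objects = rng.sample(list(by_object), min(batch_size, len(by_object)))
    return [rng.choice(by_object[o]) for o in objects]
```

In a training loop, `same_object_prob` would be annealed from 0 toward 1 on some schedule, which is the essence of the curriculum.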
