Paper Title
Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval
Paper Authors
Paper Abstract
Vision-Language Pretraining (VLP) and foundation models have been the go-to recipe for achieving SoTA performance on general benchmarks. However, leveraging these powerful techniques for more complex vision-language tasks, such as cooking applications with more structured input data, remains little investigated. In this work, we propose to leverage these techniques for structured-text based computational cuisine tasks. Our strategy, dubbed VLPCook, first transforms existing image-text pairs into image and structured-text pairs. This allows us to pretrain our VLPCook model using VLP objectives adapted to the structured data of the resulting datasets, and then finetune it on downstream computational cooking tasks. During finetuning, we also enrich the visual encoder, leveraging pretrained foundation models (e.g., CLIP) to provide local and global textual context. VLPCook outperforms the current SoTA by a significant margin (+3.3 Recall@1 absolute improvement) on the task of Cross-Modal Food Retrieval on the large Recipe1M dataset. We conduct further experiments on VLP to validate its importance, especially on the Recipe1M+ dataset. Finally, we validate the generalization of the approach to other tasks (i.e., Food Recognition) and to domains with structured text, such as the medical domain on the ROCO dataset. The code is available here: https://github.com/mshukor/VLPCook
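As context for the Recall@1 metric reported above, the sketch below shows one common way to compute Recall@K for cross-modal retrieval from paired image and recipe embeddings. It is a minimal illustrative example only, not the authors' evaluation code: the helper name recall_at_k, the tensor shapes, and the random embeddings standing in for encoder outputs are all assumptions.

```python
# Minimal sketch (assumed, not from the VLPCook repository) of Recall@K
# for cross-modal retrieval: rank candidate recipes by cosine similarity
# for each image query and check whether the matching recipe is in the top K.
import torch
import torch.nn.functional as F

def recall_at_k(image_emb: torch.Tensor, text_emb: torch.Tensor, k: int = 1) -> float:
    """image_emb, text_emb: (N, D) tensors; row i of each forms a matching pair."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sims = image_emb @ text_emb.t()              # (N, N) cosine similarity matrix
    topk = sims.topk(k, dim=-1).indices          # indices of the k best recipes per image
    targets = torch.arange(sims.size(0)).unsqueeze(-1)
    return (topk == targets).any(dim=-1).float().mean().item()

# Hypothetical usage with random embeddings in place of the image and
# structured-text encoder outputs (correlated, as trained encoders would be):
img = torch.randn(1000, 512)
txt = img + 0.1 * torch.randn_like(img)
print(f"Recall@1: {recall_at_k(img, txt, k=1):.3f}")
```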