Paper Title
A Comparison of Supervised Learning to Match Methods for Product Search
Paper Authors
Paper Abstract
The vocabulary gap is a core challenge in information retrieval (IR). In e-commerce applications such as product search, the vocabulary gap is reported to be a bigger challenge than in more traditional IR application areas, such as news search or web search. As recent learning to match methods have made important advances in bridging the vocabulary gap in these traditional IR areas, we investigate their potential in the context of product search. In this paper, we provide insights into using recent learning to match methods for product search. We compare both the effectiveness and the efficiency of these methods in a product search setting and analyze their performance on two product search datasets, each with 50,000 queries. One is an open dataset made available as part of a community benchmark activity at CIKM 2016. The other is a proprietary query log obtained from a European e-commerce platform. This comparison is conducted to better understand the trade-offs in choosing a preferred model for this task. We find that (1) models that have been specifically designed for short text matching, like MV-LSTM and DRMMTKS, are consistently among the top three methods in all experiments; however, when taking efficiency and accuracy into account at the same time, ARC-I is the preferred model for real-world use cases; and (2) the performance of a state-of-the-art BERT-based model is mediocre, which we attribute to the fact that the text BERT is pre-trained on is very different from the text we have in product search. We also provide insights into factors that can influence model behavior for different types of queries, such as the length of the retrieved list and query complexity, and discuss the implications of our findings for e-commerce practitioners with respect to choosing a well-performing method.
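Since the abstract singles out ARC-I as the preferred model when both accuracy and efficiency matter, a minimal sketch of that family of models may help make the comparison concrete. The following is an illustrative ARC-I-style matcher in PyTorch, following the general representation-based architecture of Hu et al. (2014): query and product title are encoded independently and matched only at the final layer. All hyperparameters, class names, and the toy inputs are placeholder assumptions for illustration; this is not the authors' implementation.

```python
# A minimal, illustrative ARC-I-style matcher in PyTorch (a sketch, not the
# paper's implementation). Hyperparameters are placeholder values.
import torch
import torch.nn as nn

class ArcILikeMatcher(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=64, kernel_size=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Each text (query or product title) is encoded independently by the
        # same 1D-convolutional encoder; interaction happens only at the end,
        # which is what makes this family of models efficient at serving time.
        self.encoder = nn.Sequential(
            nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.scorer = nn.Sequential(
            nn.Linear(2 * num_filters, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def encode(self, token_ids):
        # (batch, seq_len) -> (batch, num_filters)
        emb = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        return self.encoder(emb).squeeze(-1)

    def forward(self, query_ids, doc_ids):
        q = self.encode(query_ids)
        d = self.encode(doc_ids)
        # Concatenate the two fixed-size representations and score the pair.
        return self.scorer(torch.cat([q, d], dim=-1)).squeeze(-1)

# Toy usage: score a batch of two (query, product title) pairs of random ids.
model = ArcILikeMatcher(vocab_size=10000)
query = torch.randint(1, 10000, (2, 8))   # 2 queries, 8 tokens each
title = torch.randint(1, 10000, (2, 20))  # 2 product titles, 20 tokens each
print(model(query, title))                # one relevance score per pair
```

One design consequence worth noting: because the document side is encoded independently of the query, product-title representations in such a model can be precomputed and cached offline, which is consistent with the abstract's efficiency argument for preferring ARC-I in real-world use cases.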