Paper Title

Offline Retrieval Evaluation Without Evaluation Metrics

Authors

Fernando Diaz, Andres Ferraro

Abstract

Offline evaluation of information retrieval and recommendation has traditionally focused on distilling the quality of a ranking into a scalar metric such as average precision or normalized discounted cumulative gain. We can use this metric to compare the performance of multiple systems for the same request. Although evaluation metrics provide a convenient summary of system performance, they also collapse subtle differences across users into a single number and can carry assumptions about user behavior and utility not supported across retrieval scenarios. We propose recall-paired preference (RPP), a metric-free evaluation method based on directly computing a preference between ranked lists. RPP simulates multiple user subpopulations per query and compares systems across these pseudo-populations. Our results across multiple search and recommendation tasks demonstrate that RPP substantially improves discriminative power while correlating well with existing metrics and being equally robust to incomplete data.
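To make the idea of "directly computing a preference between ranked lists" concrete, here is a minimal sketch of a recall-paired preference computation for a single query. This is an illustration, not the paper's exact definition: it assumes each relevant item corresponds to one recall level (one simulated user subpopulation), prefers whichever system ranks that item higher, and averages the net preference; the function name and the treatment of items missing from a ranking are my own choices.

```python
def recall_paired_preference(run_a, run_b, relevant):
    """Illustrative sketch of a metric-free pairwise comparison.

    For each relevant item (standing in for one simulated user
    subpopulation / recall level), the system that ranks the item
    higher earns a "win". Returns the net preference for run_a over
    run_b in [-1, 1]; 0 means no overall preference.
    """
    rank_a = {doc: i for i, doc in enumerate(run_a)}
    rank_b = {doc: i for i, doc in enumerate(run_b)}
    # Assumption: items absent from a ranking are treated as ranked last.
    worst = max(len(run_a), len(run_b))
    wins = losses = 0
    for doc in relevant:
        ra = rank_a.get(doc, worst)
        rb = rank_b.get(doc, worst)
        if ra < rb:
            wins += 1
        elif rb < ra:
            losses += 1
    return (wins - losses) / len(relevant)

# Two systems that invert each other's ranking split the relevant
# items evenly, so neither is preferred:
print(recall_paired_preference(["d1", "d2", "d3"],
                               ["d3", "d2", "d1"],
                               {"d1", "d3"}))  # → 0.0
```

Because the comparison is made per relevant item rather than after collapsing each list into a scalar, disagreements that a summary metric would average away remain visible as opposing preferences.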
