Paper Title
A Topic Coverage Approach to Evaluation of Topic Models
Paper Authors
Paper Abstract
Topic models are widely used unsupervised models capable of learning topics - weighted lists of words and documents - from large collections of text documents. When topic models are used for discovery of topics in text collections, a question that arises naturally is how well the model-induced topics correspond to topics of interest to the analyst. In this paper we revisit and extend a so far neglected approach to topic model evaluation based on measuring topic coverage - computationally matching model topics with a set of reference topics that models are expected to uncover. The approach is well suited for analyzing models' performance in topic discovery and for large-scale analysis of both topic models and measures of model quality. We propose new measures of coverage and evaluate, in a series of experiments, different types of topic models on two distinct text domains for which interest in topic discovery exists. The experiments include evaluation of model quality, analysis of coverage of distinct topic categories, and analysis of the relationship between coverage and other methods of topic model evaluation. The paper contributes a new supervised measure of coverage, and the first unsupervised measure of coverage. The supervised measure achieves topic matching accuracy close to human agreement. The unsupervised measure correlates highly with the supervised one (Spearman's $\rho \geq 0.95$). Other contributions include insights into both topic models and different methods of model evaluation, and the datasets and code for facilitating future research on topic coverage.
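To make the coverage idea concrete, below is a minimal Python sketch of one plausible way to match model topics against reference topics and report the fraction of reference topics recovered. It is not the paper's supervised or unsupervised measure: the binary top-word representation, the cosine similarity, the 0.5/0.4 thresholds, and the function names (topic_vector, coverage) are illustrative assumptions for this sketch only.

```python
# Illustrative sketch of a coverage-style measure (assumptions, not the paper's method):
# each topic is represented by its list of top words, topics are compared by cosine
# similarity of binary top-word vectors, and a reference topic counts as "covered"
# if at least one model topic exceeds a similarity threshold.
from itertools import chain


def topic_vector(top_words, vocab):
    """Binary bag-of-words vector over a shared vocabulary."""
    words = set(top_words)
    return [1.0 if w in words else 0.0 for w in vocab]


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0


def coverage(model_topics, reference_topics, threshold=0.5):
    """Fraction of reference topics matched by at least one model topic.

    model_topics, reference_topics: lists of top-word lists.
    threshold: hypothetical similarity cutoff for declaring a match.
    """
    vocab = sorted(set(chain.from_iterable(model_topics + reference_topics)))
    model_vecs = [topic_vector(t, vocab) for t in model_topics]
    matched = 0
    for ref in reference_topics:
        ref_vec = topic_vector(ref, vocab)
        if any(cosine(mv, ref_vec) >= threshold for mv in model_vecs):
            matched += 1
    return matched / len(reference_topics)


# Toy example: two model topics, two reference topics.
model = [["election", "vote", "party", "candidate"],
         ["virus", "vaccine", "health", "hospital"]]
reference = [["election", "vote", "ballot", "candidate"],
             ["economy", "inflation", "market", "trade"]]
print(coverage(model, reference, threshold=0.4))  # 0.5: only the first reference topic is covered
```

In this toy run the first reference topic shares three of four top words with a model topic and is counted as covered, while the second has no overlap, giving a coverage of 0.5; the paper's measures replace this hand-picked similarity and threshold with supervised and unsupervised topic-matching methods validated against human agreement.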