Paper Title
Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models
Paper Authors
Paper Abstract
Embedding-based neural topic models can explicitly represent words and topics by embedding them in a homogeneous feature space, which offers higher interpretability. However, these embeddings are trained without explicit constraints, leaving a large optimization space. Moreover, a clear description of how embeddings change during training, and of how those changes affect model performance, is still lacking. In this paper, we propose an embedding-regularized neural topic model, which applies specially designed training constraints to word embeddings and topic embeddings to reduce the optimization space of the parameters. To reveal the changes and roles of embeddings, we introduce \textbf{uniformity} into embedding-based neural topic models as an evaluation metric of the embedding space. On this basis, we describe how embeddings tend to change during training via the changes in their uniformity. Furthermore, we demonstrate the impact of these changes on embedding-based neural topic models through ablation studies. Experimental results on two mainstream datasets indicate that our model significantly outperforms baseline models in balancing topic quality and document modeling. To the best of our knowledge, this work is the first attempt to exploit uniformity to explore the changes in the embeddings of embedding-based neural topic models and their impact on model performance.
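
For reference, a minimal sketch of the uniformity metric follows. The abstract does not state the paper's exact formulation, so this assumes the standard definition popularized by Wang and Isola (2020): the log of the mean pairwise Gaussian potential over L2-normalized embeddings, where lower (more negative) values indicate embeddings spread more uniformly over the unit hypersphere. The function name, the temperature parameter t, and the example data are illustrative, not the authors' implementation.

# A sketch of the uniformity metric over an embedding matrix, assuming
# the Wang & Isola (2020) definition; the paper's exact variant may differ.
import torch
import torch.nn.functional as F

def uniformity(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the mean pairwise Gaussian potential.

    embeddings: (n, d) tensor of word or topic embeddings.
    Lower (more negative) values mean the normalized embeddings
    spread more uniformly over the unit hypersphere.
    """
    x = F.normalize(embeddings, dim=-1)          # project onto the unit sphere
    sq_dists = torch.pdist(x, p=2).pow(2)        # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()   # log mean Gaussian potential

# Usage: a tight cluster scores higher (closer to 0) than well-spread vectors.
tight = torch.randn(100, 50) * 0.01 + 1.0        # hypothetical collapsed embeddings
spread = torch.randn(100, 50)                    # hypothetical well-spread embeddings
print(uniformity(tight).item(), uniformity(spread).item())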