Title
Sentiment Analysis with Contextual Embeddings and Self-Attention
Authors
Abstract
In natural language, the intended meaning of a word or phrase is often implicit and depends on its context. In this work, we propose a simple yet effective method for sentiment analysis using contextual embeddings and a self-attention mechanism. Experimental results for three languages, including the morphologically rich Polish and German, show that our model matches or even outperforms state-of-the-art models. In all cases, the superiority of models leveraging contextual embeddings is demonstrated. Finally, this work is intended as a step towards a universal, multilingual sentiment classifier.
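The abstract names self-attention over contextual embeddings as the core of the method. As a rough illustration of how such a component can pool token-level contextual embeddings into a single sentence vector for classification, here is a minimal NumPy sketch of additive self-attention pooling; all names and shapes are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def self_attention_pool(embeddings, W, v):
    """Additive self-attention pooling over token embeddings.

    embeddings: (n_tokens, d) contextual token embeddings
                (e.g. the output of a pretrained language model).
    W: (d, d_att) and v: (d_att,) are learned parameters
       (randomly initialised here for the sketch).
    Returns a (d,) sentence vector: the attention-weighted sum of tokens.
    """
    scores = np.tanh(embeddings @ W) @ v   # one scalar score per token
    weights = softmax(scores)              # attention distribution over tokens
    return weights @ embeddings            # weighted sum -> sentence vector

# Toy example with random data standing in for real contextual embeddings.
rng = np.random.default_rng(0)
n_tokens, d, d_att = 5, 8, 4
E = rng.normal(size=(n_tokens, d))
W = rng.normal(size=(d, d_att))
v = rng.normal(size=(d_att,))
sent = self_attention_pool(E, W, v)
print(sent.shape)  # (8,)
```

The resulting fixed-size vector would then feed a standard classification head; the attention weights also offer a degree of interpretability, since they indicate which tokens dominate the sentence representation.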