Paper Title

Extending Logic Explained Networks to Text Classification

Paper Authors

Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Lio

Paper Abstract

Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions. However, these models have only been applied to vision and tabular data, and they mostly favour the generation of global explanations, while local ones tend to be noisy and verbose. For these reasons, we propose LENp, improving local explanations by perturbing input words, and we test it on text classification. Our results show that (i) LENp provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) logic explanations are more useful and user-friendly than feature scoring provided by LIME as attested by a human survey.
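
To make the word-perturbation idea in the abstract concrete, the sketch below shows a generic perturbation-based local explanation for a text classifier: each word is removed in turn and scored by how much the prediction changes. This is only an illustration of the general technique, not LENp's actual procedure; the toy classifier, the masking strategy, and all function names are assumptions for the example.

from typing import Callable, List, Tuple


def perturb_words(text: str) -> List[Tuple[str, str]]:
    # Produce (removed_word, perturbed_text) pairs, dropping one word at a time.
    words = text.split()
    return [
        (word, " ".join(words[:i] + words[i + 1:]))
        for i, word in enumerate(words)
    ]


def local_explanation(
    text: str,
    predict_proba: Callable[[str], float],
) -> List[Tuple[str, float]]:
    # Score each word by the drop in the positive-class probability when it is removed;
    # words whose removal lowers the prediction most are the most relevant locally.
    base = predict_proba(text)
    scores = [(word, base - predict_proba(perturbed))
              for word, perturbed in perturb_words(text)]
    return sorted(scores, key=lambda s: s[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-in classifier (purely illustrative): fraction of "positive" cue words.
    cues = {"good", "great", "excellent"}

    def toy_predict(text: str) -> float:
        words = text.lower().split()
        return sum(w in cues for w in words) / max(len(words), 1)

    for word, score in local_explanation("the movie was really great", toy_predict):
        print(f"{word:>10s}  {score:+.3f}")

In this sketch the highest-scoring words form the local explanation; a LEN-style model would additionally express such relevant words as a logic formula rather than a plain feature ranking, which is the usability advantage the human survey in the paper compares against LIME's feature scores.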
