Paper Title

Interpreting Medical Image Classifiers by Optimization Based Counterfactual Impact Analysis

Paper Authors

David Major, Dimitrios Lenis, Maria Wimmer, Gert Sluiter, Astrid Berg, Katja Bühler

Abstract

Clinical applicability of automated decision support systems depends on a robust, well-understood classification interpretation. Artificial neural networks, while achieving class-leading scores, fall short in this regard. Therefore, numerous approaches have been proposed that map a salient region of an image to a diagnostic classification. Utilizing heuristic methodology, like blurring and noise, they tend to produce diffuse, sometimes misleading results, hindering their general adoption. In this work we overcome these issues by presenting a model-agnostic saliency mapping framework tailored to medical imaging. We replace heuristic techniques with a strong neighborhood-conditioned inpainting approach, which avoids anatomically implausible artefacts. We formulate saliency attribution as a map-quality optimization task, enforcing constrained and focused attributions. Experiments on public mammography data show quantitatively and qualitatively more precise localization and more clearly conveyed results than existing state-of-the-art methods.
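The abstract describes saliency attribution as an optimization over counterfactuals: candidate regions are replaced with inpainted content, and the method searches for a small, focused mask whose removal changes the classifier's output. The sketch below illustrates that idea only in miniature, with a toy linear classifier and a trivial "inpainter" that returns neutral background; both are hypothetical stand-ins for the paper's trained network and its neighborhood-conditioned inpainting model, and the L1-regularized projected gradient descent is an illustrative choice, not the authors' exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 16

# Toy linear "classifier" (hypothetical stand-in for the trained network):
# it only attends to features 3..5, our synthetic "lesion".
w = np.zeros(n)
w[3:6] = 1.5

def classify(x):
    return sigmoid(w @ x)

# Toy "inpainter" (stand-in for the neighborhood-conditioned inpainting
# model): replaces content with a neutral background value.
def inpaint(x):
    return np.zeros_like(x)

x = np.ones(n)      # input with the "lesion" present
x_in = inpaint(x)   # counterfactual reference content

# Optimize a soft mask m in [0,1]^n. Counterfactual: x_cf = (1-m)*x + m*x_in.
# Objective: drive the classifier score down while keeping the mask sparse.
m = np.full(n, 0.01)
lam, lr = 0.01, 0.5          # L1 sparsity weight, step size
for _ in range(200):
    x_cf = (1 - m) * x + m * x_in
    p = classify(x_cf)
    # Analytic gradient: dp/dm_i = p(1-p) * w_i * (x_in_i - x_i), plus L1 term.
    grad = p * (1 - p) * w * (x_in - x) + lam
    m = np.clip(m - lr * grad, 0.0, 1.0)  # projected gradient step

saliency = m  # high values mark regions whose inpainting flips the score
```

After optimization, the mask concentrates on the features the classifier actually uses (here 3..5) and stays near zero elsewhere, mirroring the "constrained and focused attributions" the abstract claims; the heuristic blur/noise baselines it criticizes would correspond to swapping `inpaint` for a degradation that ignores the surrounding context.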
