Paper Title
CAIPI in Practice: Towards Explainable Interactive Medical Image Classification
Paper Authors
Paper Abstract
Would you trust physicians if they could not explain their decisions to you? Medical diagnostics using machine learning has gained enormously in importance within the last decade. However, without further enhancements, many state-of-the-art machine learning methods are not suitable for medical applications. The most important reasons are insufficient data set quality and the black-box behavior of machine learning algorithms such as Deep Learning models. Consequently, end-users cannot correct the model's decisions and the corresponding explanations. The latter is crucial for the trustworthiness of machine learning in the medical domain. The research field of explainable interactive machine learning searches for methods that address both shortcomings. This paper extends the explainable and interactive CAIPI algorithm and provides an interface to simplify human-in-the-loop approaches for image classification. The interface enables the end-user (1) to investigate and (2) to correct the model's prediction and explanation, and (3) to influence the data set quality. After CAIPI optimization with only a single counterexample per iteration, the model achieves an accuracy of $97.48\%$ on the Medical MNIST and $95.02\%$ on the Fashion MNIST. This accuracy is approximately equal to state-of-the-art Deep Learning optimization procedures. Moreover, CAIPI reduces the labeling effort by approximately $80\%$.
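The iteration the abstract describes (predict and explain, let the end-user correct label and explanation, then add a single counterexample to the training set) can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the dictionary "model", the helper names, and the rule for building a counterexample are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of one CAIPI-style human-in-the-loop round.
# All names and the toy threshold model are illustrative assumptions,
# not the paper's actual code or interface.

def predict_with_explanation(model, x):
    """Predict a label plus a toy 'explanation': the feature index used."""
    idx = model["feature"]
    label = 1 if x[idx] > model["threshold"] else 0
    return label, idx

def make_counterexample(x, relevant_idx):
    """Build one counterexample by neutralizing the feature the model
    wrongly relied on, keeping the true label (the 'right answer,
    wrong reason' correction case)."""
    corrected = list(x)
    corrected[relevant_idx] = 0.0  # remove the spurious evidence
    return corrected

def caipi_iteration(model, unlabeled, oracle_label, train_set):
    """One round: query an instance, predict and explain, collect the
    end-user's correction, and append a single (counter)example."""
    x = unlabeled.pop(0)                   # query selection (simplified)
    y_hat, used_idx = predict_with_explanation(model, x)
    y_true = oracle_label(x)               # end-user corrects the label
    if y_hat == y_true and used_idx != 0:  # right prediction, wrong reason
        train_set.append((make_counterexample(x, used_idx), y_true))
    else:                                  # wrong prediction: add corrected label
        train_set.append((x, y_true))
    return train_set

# Usage: a toy model that (spuriously) relies on feature 1, while the
# ground truth depends on feature 0.
model = {"feature": 1, "threshold": 0.5}
pool = [[0.9, 0.8], [0.1, 0.2]]
train = caipi_iteration(model, pool, lambda x: 1 if x[0] > 0.5 else 0, [])
print(train)  # exactly one counterexample added per iteration
```

The point of the sketch is the loop structure: each iteration contributes exactly one corrected example, which is how CAIPI keeps the labeling effort low while still steering the model away from explanations the user rejects.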