Paper Title
Local Interpretable Model Agnostic Shap Explanations for machine learning models
Paper Authors
Abstract
With the advancement of artificial intelligence (AI) based solutions and analytics compute engines, machine learning (ML) models are becoming more complex by the day. Most of these models are used as black boxes without user interpretability, which makes it harder for people to understand or trust their predictions. A variety of frameworks use explainable AI (XAI) methods to demonstrate the explainability and interpretability of ML models and make their predictions more trustworthy. In this manuscript, we propose a methodology that we call Local Interpretable Model Agnostic Shap Explanations (LIMASE). This proposed ML explanation technique uses Shapley values under the LIME paradigm to (a) explain the prediction of any model by fitting a locally faithful and interpretable decision tree model, on which the Tree Explainer is used to calculate the Shapley values and give visually interpretable explanations; (b) provide visually interpretable global explanations by plotting the local explanations of several data points; (c) demonstrate a solution to the submodular optimization problem; (d) offer insight into regional interpretation; and (e) compute faster than the kernel explainer.
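To illustrate the core idea in step (a), here is a minimal sketch, not the authors' implementation: a black-box model is approximated around one instance by a proximity-weighted decision tree (the LIME step), and Shapley values of that surrogate are then computed. The paper uses SHAP's Tree Explainer for this; the brute-force coalition enumeration below is an illustrative stand-in, and all names, kernel widths, and dataset choices are assumptions.

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic data and a "black-box" model (stand-in for any complex model).
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

x = X[0]  # instance to explain

# LIME step: perturb around x, weight samples by proximity to x, and fit
# an interpretable surrogate tree that is locally faithful to the black box.
Z = x + rng.normal(scale=0.5, size=(300, 3))
weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.5)
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(Z, black_box.predict(Z), sample_weight=weights)

# SHAP step: exact interventional Shapley values of the surrogate at x,
# averaging coalition values over a background sample.
background = X[:50]
n = x.shape[0]

def coalition_value(S):
    """E[f(z)] with features in S fixed to x, others drawn from background."""
    Zb = background.copy()
    Zb[:, list(S)] = x[list(S)]
    return surrogate.predict(Zb).mean()

phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi[i] += w * (coalition_value(S + (i,)) - coalition_value(S))

print("Shapley values of local surrogate:", phi)
```

Because the Shapley values are computed exactly here, they satisfy the efficiency property: they sum to the surrogate's prediction at x minus its mean prediction over the background sample. This enumeration is exponential in the number of features, which is exactly why the paper's use of the Tree Explainer on a shallow surrogate tree is attractive for speed.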