Title

Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks

Authors

Shahaf Bassan, Guy Katz

Abstract

With the rapid growth of machine learning, deep neural networks (DNNs) are now being used in numerous domains. Unfortunately, DNNs are "black-boxes", and cannot be interpreted by humans, which is a substantial concern in safety-critical systems. To mitigate this issue, researchers have begun working on explainable AI (XAI) methods, which can identify a subset of input features that are the cause of a DNN's decision for a given input. Most existing techniques are heuristic, and cannot guarantee the correctness of the explanation provided. In contrast, recent and exciting attempts have shown that formal methods can be used to generate provably correct explanations. Although these methods are sound, the computational complexity of the underlying verification problem limits their scalability; and the explanations they produce might sometimes be overly complex. Here, we propose a novel approach to tackle these limitations. We (1) suggest an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation; (2) show how DNN verification can assist in calculating lower and upper bounds on the optimal explanation; (3) propose heuristics that significantly improve the scalability of the verification process; and (4) suggest the use of bundles, which allows us to arrive at more succinct and interpretable explanations. Our evaluation shows that our approach significantly outperforms state-of-the-art techniques, and produces explanations that are more useful to humans. We thus regard this work as a step toward leveraging verification technology in producing DNNs that are more reliable and comprehensible.
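The abstract's core idea — using a verifier to certify that a prediction stays invariant when only a subset of input features is fixed — can be sketched concretely. Below is a minimal Python sketch of the standard deletion-based procedure for computing a subset-minimal explanation; the `verify` callable is a hypothetical stand-in for a DNN verification query (e.g., one dispatched to a verifier such as Marabou), and this is not the paper's exact algorithm, which additionally approximates the global minimum and derives bounds on it.

```python
# Sketch only: the `verify` interface is a hypothetical stand-in for a DNN
# verification query, not the paper's actual API.

def minimal_explanation(verify, features):
    """Greedily shrink the set of fixed features while the prediction
    remains provably unchanged.

    `verify(subset)` is assumed to return True iff fixing only the features
    in `subset` (and letting all other inputs range freely) still guarantees
    the network's original prediction. The result is subset-minimal
    (no single feature can be dropped), though not necessarily a global
    minimum.
    """
    explanation = set(features)
    for f in features:
        candidate = explanation - {f}
        if verify(candidate):        # prediction still guaranteed without f
            explanation = candidate  # f is redundant; drop it
    return explanation


if __name__ == "__main__":
    # Toy stand-in verifier: pretend the prediction is guaranteed exactly
    # when features 0 and 2 are both fixed.
    toy_verify = lambda s: {0, 2} <= s
    print(minimal_explanation(toy_verify, [0, 1, 2, 3]))  # -> {0, 2}
```

Each call to `verify` is itself a costly DNN-verification query, which is why the scalability heuristics mentioned in the abstract matter; likewise, bundles treat groups of features as single units, which can both shrink the search and yield the more succinct explanations the abstract describes.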
