Paper Title

DExT: Detector Explanation Toolkit

Authors

Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro

Abstract

State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advances in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally produces separate explanations for different detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than the other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify SmoothGrad with Guided Backpropagation (GBP) as providing more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
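The abstract highlights SmoothGrad combined with Guided Backpropagation. The SmoothGrad step itself is detector-agnostic: it averages input gradients over noisy copies of the input. Below is a minimal NumPy sketch of that averaging step, not DExT's actual implementation; the `grad_fn` plug-in point (here a toy analytic gradient) is where a detector-specific gradient such as Guided Backpropagation would be supplied. All names and parameter defaults are illustrative assumptions.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_scale=0.15, seed=0):
    """SmoothGrad: average gradients over Gaussian-perturbed inputs.

    grad_fn : callable returning the gradient of a model score w.r.t.
              its input (e.g. a Guided Backpropagation gradient in a
              detector-explanation setting; here just a toy function).
    noise_scale : noise std as a fraction of the input's value range,
                  as in the original SmoothGrad formulation.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_scale * (x.max() - x.min())
    grads = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        grads += grad_fn(noisy)
    return grads / n_samples

# Toy demo: f(x) = sum(x**2) has gradient 2*x, so with zero-mean
# noise the smoothed saliency should stay close to 2*x.
x = np.array([1.0, -2.0, 3.0])
saliency = smoothgrad(lambda v: 2.0 * v, x, n_samples=500)
```

For a real detector, `grad_fn` would backpropagate a chosen output (a class score or a single bounding-box coordinate) to the input image, which is how the paper obtains separate explanations for classification and box decisions.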
