Paper Title

Using explainability to design physics-aware CNNs for solving subsurface inverse problems

Paper Authors

Jodie Crocker, Krishna Kumar, Brady R. Cox

Paper Abstract

We present a novel method of using explainability techniques to design physics-aware neural networks. We demonstrate our approach by developing a convolutional neural network (CNN) for solving an inverse problem for shallow subsurface imaging. Although CNNs have gained popularity in recent years across many fields, the development of CNNs remains an art, as there are no clear guidelines regarding the selection of hyperparameters that will yield the best network. While optimization algorithms may be used to select hyperparameters automatically, these methods focus on developing networks with high predictive accuracy while disregarding model explainability (descriptive accuracy). However, the field of Explainable Artificial Intelligence (XAI) addresses the absence of model explainability by providing tools that allow developers to evaluate the internal logic of neural networks. In this study, we use the explainability methods Score-CAM and Deep SHAP to select hyperparameters, such as kernel sizes and network depth, to develop a physics-aware CNN for shallow subsurface imaging. We begin with a relatively deep Encoder-Decoder network, which uses surface wave dispersion images as inputs and generates 2D shear wave velocity subsurface images as outputs. Through model explanations, we ultimately find that a shallow CNN using two convolutional layers with an atypical kernel size of 3x1 yields comparable predictive accuracy but with increased descriptive accuracy. We also show that explainability methods can be used to evaluate the network's complexity and decision-making. We believe this method can be used to develop neural networks with high predictive accuracy while also providing inherent explainability.
