Paper Title

Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents

Paper Authors

Ekaterina Gurina, Nikita Klyuchnikov, Ksenia Antipova, Dmitry Koroteev

Paper Abstract

We present an approach for interpreting a black-box alarming system that forecasts accidents and anomalies during the drilling of oil and gas wells. The interpretation methodology aims to explain the local behavior of the accident-prediction model to drilling engineers. The explanatory model applies Shapley additive explanations (SHAP) analysis to the features obtained from the Bag-of-features representation of telemetry logs used during the drilling-accident forecasting phase. Validation shows that the explanatory model achieves 15% precision at 70% recall and outperforms both a random baseline and a multi-head attention neural network. These results indicate that the developed explanatory model aligns better with drilling engineers' explanations than the state-of-the-art method. The joint performance of the explanatory and Bag-of-features models allows drilling engineers to understand the logic behind the system's decisions at a particular moment, pay attention to the highlighted telemetry regions, and correspondingly increase their trust in the accident-forecasting alarms.
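The abstract describes a two-stage pipeline: telemetry logs are encoded with a Bag-of-features representation, a classifier forecasts accidents from those feature histograms, and SHAP values then attribute each alarm to individual features so an engineer can see which telemetry patterns drove the alert. The sketch below illustrates that pipeline on synthetic data; it is not the authors' implementation, and the windowing scheme, the KMeans codebook, the GradientBoostingClassifier, and all sizes are illustrative assumptions.

```python
# Minimal sketch of a Bag-of-features + SHAP explanation pipeline (assumed setup,
# not the paper's code): telemetry windows -> codeword histograms -> classifier
# -> per-prediction Shapley additive explanations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
import shap  # pip install shap

rng = np.random.default_rng(0)

# Synthetic stand-in for telemetry logs: each "well" is a (time, channels) array.
n_wells, n_steps, n_channels = 200, 300, 4
telemetry = rng.normal(size=(n_wells, n_steps, n_channels))
labels = rng.integers(0, 2, size=n_wells)  # 1 = accident follows, 0 = normal drilling

# Step 1: cut each log into fixed-length windows and summarize each window.
window = 30
def window_features(log):
    feats = []
    for start in range(0, log.shape[0] - window + 1, window):
        chunk = log[start:start + window]
        feats.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    return np.array(feats)

all_windows = np.vstack([window_features(log) for log in telemetry])

# Step 2: learn a codebook and encode each well as a histogram of codewords
# (the Bag-of-features representation).
n_codewords = 16
codebook = KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit(all_windows)

def bag_of_features(log):
    codes = codebook.predict(window_features(log))
    return np.bincount(codes, minlength=n_codewords) / len(codes)

X = np.array([bag_of_features(log) for log in telemetry])

# Step 3: train the accident-forecasting classifier on the histograms.
model = GradientBoostingClassifier(random_state=0).fit(X, labels)

# Step 4: explain one alarm with SHAP; each value says how much a codeword
# (i.e. a recurring telemetry pattern) pushed the prediction toward "accident".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
top = np.argsort(-np.abs(shap_values[0]))[:3]
print("Most influential codewords for this alarm:", top)
```

In this sketch the codeword indices with the largest absolute SHAP values can be traced back to the telemetry windows that map to those codewords, which is the kind of highlighted-region feedback the abstract says drilling engineers receive.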
