Paper Title
Transformation Importance with Applications to Cosmology
Paper Authors
Paper Abstract
Machine learning lies at the heart of new possibilities for scientific discovery, knowledge generation, and artificial intelligence. Its potential benefits to these fields require going beyond predictive accuracy and focusing on interpretability. In particular, many scientific problems require interpretations in a domain-specific interpretable feature space (e.g., the frequency domain), whereas attributions to the raw features (e.g., the pixel space) may be unintelligible or even misleading. To address this challenge, we propose TRIM (TRansformation IMportance), a novel approach which attributes importances to features in a transformed space and can be applied post-hoc to a fully trained model. TRIM is motivated by a cosmological parameter estimation problem using deep neural networks (DNNs) on simulated data, but it is generally applicable across domains/models and can be combined with any local interpretation method. In our cosmology example, combining TRIM with contextual decomposition shows promising results for identifying which frequencies a DNN uses, helping cosmologists to understand and validate that the model learns appropriate physical features rather than simulation artifacts.
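One way to realize such transformed-space attributions is to re-express the model on the transformed coordinates (here, 2D Fourier coefficients) and then run a local attribution method on that reparameterized model. The sketch below is a minimal illustration of this idea, assuming a trained PyTorch model `f` that maps a real-valued 2D map to a scalar prediction; gradient × input stands in for contextual decomposition, and the helper name `trim_attribution` and the input shapes are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch: score frequency components of an input for their effect on a
# trained model's prediction. Gradient x input is a simple stand-in for
# contextual decomposition; `trim_attribution` is a hypothetical helper name.
import torch

def trim_attribution(f, x):
    """Score each 2D Fourier coefficient of `x` for its effect on f's output."""
    # Move the input into the interpretable (frequency) space.
    s = torch.fft.fft2(x).detach().requires_grad_(True)
    # Re-express the model on the transformed space: evaluate f on the
    # reconstruction obtained from the transformed coefficients.
    y = f(torch.fft.ifft2(s).real)
    y.sum().backward()
    # Gradient x input style score for each frequency coefficient.
    return (s.grad.conj() * s).real

# Illustrative usage: `model` is a trained DNN and `maps` a batch of simulated
# mass maps of shape (batch, H, W); `scores` has the same shape as fft2(maps).
# scores = trim_attribution(model, maps)
```

Because the attribution is computed on the transformed coefficients rather than on pixels, the resulting scores can be read directly as "which frequencies the DNN uses," which is the kind of check described in the abstract.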