Paper Title

Amortized learning of neural causal representations

Authors

Nan Rosemary Ke, Jane X. Wang, Jovana Mitrovic, Martin Szummer, Danilo J. Rezende

Abstract

Causal models can compactly and efficiently encode the data-generating process under all interventions and hence may generalize better under changes in distribution. These models are often represented as Bayesian networks, and learning them scales poorly with the number of variables. Moreover, these approaches cannot leverage previously learned knowledge to help with learning new causal models. To tackle these challenges, we present a novel algorithm called causal relational networks (CRN) for learning causal models using neural networks. The CRN represents causal models using continuous representations and hence can scale much better with the number of variables. These models also take in previously learned information to facilitate learning of new causal models. Finally, we propose a decoding-based metric to evaluate causal models with continuous representations. We test our method on synthetic data, achieving high accuracy and quick adaptation to previously unseen causal models.
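
The abstract only sketches the approach at a high level. The Python snippet below is a minimal, hypothetical illustration (not the paper's actual method or code) of two ideas it mentions: keeping a continuous, real-valued representation of a causal graph, and evaluating it with a decoding-based metric that thresholds the representation back into a discrete graph and compares it against the ground truth. The toy linear SEM, the interventional scoring heuristic, and all names in the snippet are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 4, 2000

# Hypothetical ground-truth linear SEM over a small DAG (upper-triangular weights).
true_adj = np.triu(rng.integers(0, 2, size=(n_vars, n_vars)), k=1)
weights = true_adj * rng.uniform(1.0, 2.0, size=(n_vars, n_vars))

def sample(intervened=None, value=0.0):
    """Ancestral sampling from the toy SEM; optionally clamp one variable (a do-intervention)."""
    x = np.zeros((n_samples, n_vars))
    for j in range(n_vars):  # variables are already in topological order
        x[:, j] = x @ weights[:, j] + rng.normal(size=n_samples)
        if j == intervened:
            x[:, j] = value
    return x

# Continuous representation of the causal model: one real-valued score per directed
# edge, filled in here by a crude interventional heuristic (how much does clamping
# variable i shift variable j?). In the paper a neural network would produce and
# update such a representation; this hand-coded heuristic is only a stand-in.
edge_scores = np.zeros((n_vars, n_vars))
for i in range(n_vars):
    shift = sample(intervened=i, value=5.0).mean(0) - sample(intervened=i, value=-5.0).mean(0)
    edge_scores[i] = np.abs(shift)
    edge_scores[i, i] = 0.0

def decode(scores, threshold=1.0):
    """Decoding-based metric: threshold the continuous scores into a discrete graph."""
    return (scores > threshold).astype(int)

# Compare the decoded graph with the ground truth on the off-diagonal entries.
# Note: the heuristic measures total causal effect, so indirect ancestors can also
# pass the threshold; the point is only to show the decode-and-compare evaluation.
off_diag = ~np.eye(n_vars, dtype=bool)
accuracy = (decode(edge_scores)[off_diag] == true_adj[off_diag]).mean()
print("decoded graph:\n", decode(edge_scores))
print("edge accuracy vs. ground truth:", accuracy)
```

An amortized learner in the spirit of the abstract would replace the hand-coded scoring loop with a neural network that consumes interventional samples and outputs the continuous edge representation directly, so that experience from previously seen causal models can speed up learning on new ones.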
