Paper Title
Scaling Up Probabilistic Circuits by Latent Variable Distillation
Paper Authors
Paper Abstract
Probabilistic Circuits (PCs) are a unified framework for tractable probabilistic models that support efficient computation of various probabilistic queries (e.g., marginal probabilities). One key challenge is to scale PCs to model large and high-dimensional real-world datasets: we observe that as the number of parameters in PCs increases, their performance quickly plateaus. This phenomenon suggests that existing optimizers fail to exploit the full expressive power of large PCs. We propose to overcome this bottleneck by latent variable distillation: we leverage less tractable but more expressive deep generative models to provide extra supervision over the latent variables of PCs. Specifically, we extract information from Transformer-based generative models to assign values to the latent variables of PCs, providing guidance to PC optimizers. Experiments on both image and language modeling benchmarks (e.g., ImageNet and WikiText-2) show that latent variable distillation substantially boosts the performance of large PCs compared to their counterparts without latent variable distillation. In particular, on the image modeling benchmarks, PCs achieve competitive performance against some widely used deep generative models, including variational autoencoders and flow-based models, opening up new avenues for tractable generative modeling. Our code can be found at https://github.com/UCLA-StarAI/LVD.
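The sketch below illustrates the high-level recipe described in the abstract, under stated assumptions: a pretrained Transformer encoder provides per-patch features, those features are clustered to assign discrete values to the PC's latent variables, and the resulting assignments serve as extra supervision before joint PC optimization. All names (embed_patches, distill_latents, n_latent_values) are hypothetical; the Transformer is stubbed with raw pixels so the example runs standalone, and this is not the authors' implementation (see the linked repository for that).

```python
# Minimal, illustrative sketch of latent variable distillation (LVD).
# Assumption: features from a pretrained Transformer are clustered, and the
# cluster ids are treated as observed values of the PC's latent variables.

import numpy as np
from sklearn.cluster import KMeans


def embed_patches(images: np.ndarray, patch: int = 8) -> np.ndarray:
    """Stand-in for a pretrained Transformer encoder: returns one feature
    vector per image patch. Raw pixels are used here so no model weights
    are required; a real pipeline would use Transformer hidden states."""
    n, h, w = images.shape
    feats = []
    for img in images:
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                feats.append(img[i:i + patch, j:j + patch].reshape(-1))
    return np.stack(feats)  # shape: (n * num_patches, patch * patch)


def distill_latents(features: np.ndarray, n_latent_values: int = 16) -> np.ndarray:
    """Assign a discrete value to each latent variable by clustering the
    features; the cluster ids play the role of distilled latent values."""
    km = KMeans(n_clusters=n_latent_values, n_init=4, random_state=0)
    return km.fit_predict(features)


# Toy data: 32 grayscale "images" of size 16x16.
rng = np.random.default_rng(0)
images = rng.random((32, 16, 16))

features = embed_patches(images)        # one feature vector per patch
latent_ids = distill_latents(features)  # distilled latent assignments

# With latents treated as observed, per-cluster statistics can be fit in
# closed form (here: mean pixel intensity per cluster), which is the kind
# of supervision a PC optimizer could start from before fine-tuning.
for z in range(latent_ids.max() + 1):
    members = features[latent_ids == z]
    print(f"latent value {z}: {len(members)} patches, "
          f"mean intensity {members.mean():.3f}")
```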