Paper Title

Are Quantum Circuits Better than Neural Networks at Learning Multi-dimensional Discrete Data? An Investigation into Practical Quantum Circuit Generative Models

Paper Authors

Zhai, Pengyuan

Paper Abstract

Are multi-layer parameterized quantum circuits (MPQCs) more expressive than classical neural networks (NNs)? How, why, and in what aspects? In this work, we survey and develop intuitive insights into the expressive power of MPQCs in relation to classical NNs. We organize the available sources into a systematic proof of why MPQCs are able to generate probability distributions that cannot be efficiently simulated classically. We first show that instantaneous quantum polynomial circuits (IQPCs) are unlikely to be simulated classically to within a multiplicative error, and then show that MPQCs efficiently generalize IQPCs. We support the surveyed claims with numerical simulations: with the MPQC as the core architecture, we build different versions of quantum generative models to learn a given multi-dimensional, multi-modal discrete data distribution, and show their superior performance over a classical Generative Adversarial Network (GAN) equipped with the Gumbel-Softmax trick for generating discrete data. In addition, we address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the (quantum) gradient, and how to alleviate mode collapse. We propose and experimentally verify an efficient training-and-fine-tuning scheme that lowers the output noise and reduces mode collapse. As an original contribution, we develop a novel loss function (the MCR loss) inspired by an information-theoretic measure, the coding rate reduction metric, which yields more expressive and geometrically meaningful latent-space representations, beneficial for both model selection and alleviating mode collapse. We derive the gradient of the MCR loss with respect to the circuit parameters under two settings: with a radial basis function (RBF) kernel and with an NN discriminator, and we conduct experiments to showcase its effectiveness.
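For context on the classical baseline: the Gumbel-Softmax trick lets a GAN generator emit approximately discrete samples while staying differentiable in its logits. Below is a minimal NumPy sketch of the standard trick, not the paper's implementation; the logits and temperature values are illustrative.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Soft one-hot sample from a categorical distribution given by `logits`.

    Gumbel(0, 1) noise g_i = -log(-log(u_i)), u_i ~ Uniform(0, 1), is added
    to the logits and the result is annealed with temperature `tau`; as
    tau -> 0 the output approaches a hard one-hot vector, so the discrete
    sampling step remains differentiable with respect to the logits.
    """
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()              # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

# Example: a 4-category distribution; a lower tau gives sharper samples.
sample = gumbel_softmax_sample(np.log(np.array([0.1, 0.2, 0.3, 0.4])), tau=0.5)
```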
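On the quantum-gradient point, a standard way to obtain exact gradients of a circuit expectation value is the parameter-shift rule. The sketch below is a generic illustration under two assumptions: `expectation(theta)` is a hypothetical callable (any simulator or hardware backend could supply it), and each parameterized gate is generated by a Pauli operator, for which the pi/2-shift identity holds.

```python
import numpy as np

def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
    """Exact gradient of <H>(theta) via the parameter-shift rule.

    For a gate exp(-i * theta_k / 2 * P) with P a Pauli operator,
        d<H> / d theta_k = ( <H>(theta + s * e_k) - <H>(theta - s * e_k) ) / 2
    with s = pi / 2. Unlike finite differences, this identity is exact,
    at the cost of two circuit evaluations per parameter.
    """
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for k in range(theta.size):
        shifted = theta.copy()
        shifted[k] += shift
        plus = expectation(shifted)       # <H> at theta_k + pi/2
        shifted[k] -= 2 * shift
        minus = expectation(shifted)      # <H> at theta_k - pi/2
        grad[k] = 0.5 * (plus - minus)
    return grad
```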
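The MCR loss builds on the coding-rate-reduction (MCR^2) objective of Yu et al.; the NumPy sketch below shows only that underlying metric, not the paper's loss. The feature matrix `Z` (columns are latent representations) and the grouping `labels` are illustrative stand-ins; the circuit-parameter gradients for the RBF-kernel and NN-discriminator variants are derived in the full text.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T), Z in R^{d x n}."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the whole-set coding rate
    minus the sample-weighted average per-group rate. A large Delta R means
    the groups are individually compact but jointly spread out, which is why
    maximizing it discourages all samples from collapsing onto one mode."""
    _, n = Z.shape
    r_compressed = 0.0
    for j in np.unique(labels):
        Zj = Z[:, labels == j]
        r_compressed += (Zj.shape[1] / n) * coding_rate(Zj, eps)
    return coding_rate(Z, eps) - r_compressed
```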
