Paper Title
Training Hybrid Classical-Quantum Classifiers via Stochastic Variational Optimization
Paper Authors
Paper Abstract
Quantum machine learning has emerged as a potential practical application of near-term quantum devices. In this work, we study a two-layer hybrid classical-quantum classifier in which a first layer of quantum stochastic neurons implementing generalized linear models (QGLMs) is followed by a second classical combining layer. The input to the first, hidden, layer is obtained via amplitude encoding in order to leverage the exponential size of the fan-in of the quantum neurons in the number of qubits per neuron. To facilitate implementation of the QGLMs, all weights and activations are binary. While the state of the art on training strategies for this class of models is limited to exhaustive search and single-neuron perceptron-like bit-flip strategies, this letter introduces a stochastic variational optimization approach that enables the joint training of quantum and classical layers via stochastic gradient descent. Experiments show the advantages of the approach for a variety of activation functions implemented by QGLM neurons.
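The abstract describes jointly training binary quantum and classical layers via stochastic variational optimization. The code below is a minimal, purely classical sketch of that idea, assuming Bernoulli variational distributions over the ±1 weights and a score-function (REINFORCE-style) gradient estimator; the sign-activation layers are classical stand-ins for the QGLM neurons, and the dataset, hyperparameters, and all names are hypothetical illustrations rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: linearly separable +/-1 labels.
n, d, h = 200, 8, 4                         # samples, input dim, hidden neurons
X = rng.normal(size=(n, d))
w_true = rng.choice([-1.0, 1.0], size=d)
y = np.where(X @ w_true >= 0, 1.0, -1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_signs(logits):
    """Sample +/-1 weights from the Bernoulli variational distributions."""
    p = sigmoid(logits)
    w = np.where(rng.random(size=logits.shape) < p, 1.0, -1.0)
    return w, p

def forward(Wh, wo, X):
    """Binary-weight, binary-activation network (sign activations)."""
    hidden = np.where(X @ Wh >= 0, 1.0, -1.0)   # classical stand-in for the QGLM layer
    return np.where(hidden @ wo >= 0, 1.0, -1.0)

# Variational parameters: logits of the Bernoulli distributions over the weights.
theta_h = np.zeros((d, h))                  # first ("quantum") layer
theta_o = np.zeros(h)                       # second, classical combining layer

lr, n_mc = 0.5, 16
for step in range(500):
    draws = []
    for _ in range(n_mc):
        Wh, ph = sample_signs(theta_h)
        wo, po = sample_signs(theta_o)
        loss = np.mean(forward(Wh, wo, X) != y)          # 0/1 training error
        # Score function: d log q(w)/d theta = (w + 1)/2 - p for w in {-1, +1}.
        draws.append((loss, (Wh + 1) / 2 - ph, (wo + 1) / 2 - po))
    baseline = np.mean([l for l, _, _ in draws])         # simple variance-reduction baseline
    grad_h = np.mean([(l - baseline) * sh for l, sh, _ in draws], axis=0)
    grad_o = np.mean([(l - baseline) * so for l, _, so in draws], axis=0)
    theta_h -= lr * grad_h                               # gradient step on the expected loss
    theta_o -= lr * grad_o

# Evaluate with the most probable (hard-thresholded) binary weights.
Wh = np.where(sigmoid(theta_h) >= 0.5, 1.0, -1.0)
wo = np.where(sigmoid(theta_o) >= 0.5, 1.0, -1.0)
print("training 0/1 error:", np.mean(forward(Wh, wo, X) != y))
```

Because the estimator only requires loss evaluations of sampled binary networks, the same update rule applies unchanged when the hidden layer is realized by QGLM neurons on quantum hardware, which is what makes joint stochastic-gradient training of both layers possible.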