Paper Title

tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder

Paper Authors

Alex Campbell, Pietro Liò

Paper Abstract

Variational autoencoders (VAEs) are a powerful class of deep generative latent variable models for unsupervised representation learning on high-dimensional data. To ensure computational tractability, VAEs are often implemented with a univariate standard Gaussian prior and a mean-field Gaussian variational posterior distribution. This results in vector-valued latent variables that are agnostic to the original data structure, which may be highly correlated across and within multiple dimensions. We propose a tensor-variate extension to the VAE framework, the tensor-variate Gaussian process prior variational autoencoder (tvGP-VAE), which replaces the standard univariate Gaussian prior and posterior distributions with tensor-variate Gaussian processes. The tvGP-VAE is able to explicitly model correlation structures via the use of kernel functions over the dimensions of tensor-valued latent variables. Using spatiotemporally correlated image time series as an example, we show that the choice of which correlation structures to explicitly represent in the latent space has a significant impact on model performance in terms of reconstruction.
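The core idea of kernels over the dimensions of a tensor-valued latent variable can be sketched with a separable (Kronecker-structured) covariance. The sketch below is illustrative only and is not the paper's implementation: the shapes, lengthscales, and the choice of an RBF kernel per dimension are assumptions, not details taken from the abstract.

```python
import numpy as np

def rbf_kernel(xs, lengthscale=1.0):
    # Squared-exponential kernel matrix over 1-D index points.
    d = xs[:, None] - xs[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Hypothetical latent tensor of shape (T, H, W): time x height x width.
T, H, W = 5, 4, 4
Kt = rbf_kernel(np.arange(T, dtype=float), lengthscale=2.0)  # temporal correlation
Kh = rbf_kernel(np.arange(H, dtype=float), lengthscale=1.5)  # spatial correlation (rows)
Kw = rbf_kernel(np.arange(W, dtype=float), lengthscale=1.5)  # spatial correlation (cols)

# Separable covariance over the flattened tensor: cov = Kt (x) Kh (x) Kw,
# so each dimension's correlation structure is set by its own kernel.
cov = np.kron(np.kron(Kt, Kh), Kw)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(T * H * W))  # jitter for stability

# Draw one tensor-variate sample from the zero-mean prior N(0, cov).
rng = np.random.default_rng(0)
z = (L @ rng.standard_normal(T * H * W)).reshape(T, H, W)
print(z.shape)  # (5, 4, 4)
```

Dropping any one factor to an identity matrix recovers a model that ignores correlation along that dimension, which is the kind of modelling choice the abstract reports as having a significant effect on reconstruction.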
