Paper Title


Variational Auto-Encoder: not all failures are equal

Paper Authors

Berger, Victor; Sebag, Michèle

Abstract


We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that, even in the large-sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness should preferably be learned by the VAE (as opposed to being fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression. A second empirical contribution is to show how control of this trade-off is instrumental in escaping poor local optima, akin to a simulated annealing schedule. Both claims are backed by experiments on artificial data, MNIST, and CelebA, showing how sharpness learning addresses the notorious VAE blurriness issue.
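To make the "sharpness must match the scale of the data" claim concrete, the following sketch (not the authors' implementation; the Gaussian observation model with a single shared scale σ is an assumption for illustration) shows the per-pixel Gaussian negative log-likelihood as a function of log σ. The reconstruction term scales as 1/(2σ²) while the log σ term penalizes an overly diffuse model, so the NLL is minimized when σ² equals the mean squared residual, i.e. when the learned sharpness matches the data:

```python
import numpy as np

def gaussian_nll(x, x_hat, log_sigma):
    """Mean Gaussian negative log-likelihood with a shared scale sigma.

    Trade-off illustrated: a small sigma amplifies reconstruction errors
    via the 1/(2 sigma^2) factor, while the +log(sigma) term penalizes a
    too-diffuse (blurry) observation model.
    """
    sigma2 = np.exp(2.0 * log_sigma)
    mse = np.mean((x - x_hat) ** 2)
    return 0.5 * mse / sigma2 + log_sigma + 0.5 * np.log(2.0 * np.pi)

# Synthetic reconstructions with residuals of scale ~0.1.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
x_hat = x + 0.1 * rng.normal(size=1000)

# Grid search over log(sigma): the optimum sits near log(0.1),
# i.e. sigma matches the residual scale of the data.
log_sigmas = np.linspace(-4.0, 1.0, 201)
nlls = [gaussian_nll(x, x_hat, s) for s in log_sigmas]
best_log_sigma = log_sigmas[int(np.argmin(nlls))]
```

In a VAE this log σ would be a trainable parameter optimized jointly with the encoder and decoder, which is what lets the model anneal the reconstruction/compression trade-off during training rather than fixing it offline.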
