Paper Title

LatentGAN Autoencoder: Learning Disentangled Latent Distribution

Authors

Sanket Kalwar, Animikh Aich, Tanay Dixit, Adit Chhabra

Abstract


In an autoencoder, the encoder generally approximates the latent distribution over the dataset, and the decoder generates samples from this learned latent distribution. There is very little control over the latent vector, as decoding a random latent vector leads to trivial outputs. This work addresses the issue by using the LatentGAN generator to directly learn to approximate the latent distribution of the autoencoder, showing meaningful results on the MNIST, 3D Chairs, and CelebA datasets. An additional information-theoretic constraint is used, which successfully learns to control the autoencoder's latent distribution. With this, our model also achieves an error rate of 2.38 on MNIST unsupervised image classification, which is better than that of InfoGAN and AAE.
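The data flow the abstract describes can be sketched as follows: an encoder produces "real" latents from data, a LatentGAN generator maps noise to "fake" latents, and a discriminator on the latent space drives the generator to match the encoder's latent distribution, so that decoding generated latents yields non-trivial samples. This is a minimal illustrative sketch only; the layer sizes, the use of plain linear maps, and all function names are assumptions, not the paper's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
IMG_DIM, LATENT_DIM, NOISE_DIM = 784, 32, 16

# Random linear maps stand in for the trained networks.
W_enc = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.01
W_dec = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.01
W_gen = rng.normal(size=(NOISE_DIM, LATENT_DIM)) * 0.01
W_disc = rng.normal(size=(LATENT_DIM, 1)) * 0.01

def encoder(x):           # image -> latent ("real" latent samples)
    return np.tanh(x @ W_enc)

def decoder(z):           # latent -> image
    return np.tanh(z @ W_dec)

def generator(eps):       # noise -> latent (the LatentGAN generator)
    return np.tanh(eps @ W_gen)

def discriminator(z):     # latent -> probability that z came from the encoder
    return 1.0 / (1.0 + np.exp(-(z @ W_disc)))

# Forward pass: the discriminator would be trained to separate encoder
# latents from generator latents; the generator would be trained to fool
# it, i.e. to match the encoder's latent distribution. Decoding generated
# latents then produces meaningful (non-trivial) samples.
x = rng.normal(size=(8, IMG_DIM))      # a batch of flattened images
z_real = encoder(x)
eps = rng.normal(size=(8, NOISE_DIM))
z_fake = generator(eps)
x_gen = decoder(z_fake)

print(z_real.shape, z_fake.shape, x_gen.shape)
```

The information-theoretic constraint mentioned in the abstract (as in InfoGAN) would additionally encourage the generator's latent codes to remain predictable from its outputs; that auxiliary network is omitted here for brevity.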
