Paper Title

Evolutionary Variational Optimization of Generative Models

Authors

Jakob Drefs, Enrico Guiraud, Jörg Lücke

Abstract

We combine two popular optimization approaches to derive learning algorithms for generative models: variational optimization and evolutionary algorithms. The combination is realized for generative models with discrete latents by using truncated posteriors as the family of variational distributions. The variational parameters of truncated posteriors are sets of latent states. By interpreting these states as genomes of individuals and by using the variational lower bound to define a fitness, we can apply evolutionary algorithms to realize the variational loop. The variational distributions used are very flexible, and we show that evolutionary algorithms can effectively and efficiently optimize the variational bound. Furthermore, the variational loop is generally applicable ("black box") with no analytical derivations required. To show general applicability, we apply the approach to three generative models (we use noisy-OR Bayes Nets, Binary Sparse Coding, and Spike-and-Slab Sparse Coding). To demonstrate effectiveness and efficiency of the novel variational approach, we use the standard competitive benchmarks of image denoising and inpainting. The benchmarks allow quantitative comparisons to a wide range of methods including probabilistic approaches, deep deterministic and generative networks, and non-local image processing methods. In the category of "zero-shot" learning (when only the corrupted image is used for training), we observed the evolutionary variational algorithm to significantly improve the state-of-the-art in many benchmark settings. For one well-known inpainting benchmark, we also observed state-of-the-art performance across all categories of algorithms although we only train on the corrupted image. In general, our investigations highlight the importance of research on optimization methods for generative models to achieve performance improvements.
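To make the core idea concrete, here is a minimal, illustrative Python sketch of the evolutionary variational loop for a toy Binary Sparse Coding model (one of the three models the paper considers). All names (`log_joint`, `evolve_states`, `truncated_posterior`) and hyperparameters are assumptions for illustration, not the authors' reference implementation; in particular, the paper's evolutionary algorithm also uses crossover and further operators, while this sketch uses only bit-flip mutation and selection. Each data point's variational parameters are a set K of binary latent states ("genomes"); the fitness of a state s is its log joint log p(s, x | Θ), since replacing states in K with higher-joint states increases the truncated variational lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(s, x, W, pi, sigma):
    """log p(s, x | Theta) for Binary Sparse Coding:
    p(s_h = 1) = pi (Bernoulli prior), p(x | s) = N(x; W s, sigma^2 I)."""
    prior = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
    resid = x - W @ s
    lik = -0.5 * np.sum(resid**2) / sigma**2 \
          - 0.5 * len(x) * np.log(2 * np.pi * sigma**2)
    return prior + lik

def evolve_states(K, x, W, pi, sigma, n_generations=5, p_flip=None):
    """Evolutionary E-step: treat each latent state in K as a genome,
    use log p(s, x | Theta) as fitness, keep the fittest |K| states.
    (The paper's EA additionally uses crossover; omitted here.)"""
    size, H = K.shape
    if p_flip is None:
        p_flip = 1.0 / H  # on average one bit flip per child
    for _ in range(n_generations):
        # mutation: bit-flip children of the current population
        flips = rng.random(K.shape) < p_flip
        children = np.logical_xor(K.astype(bool), flips).astype(K.dtype)
        # candidate pool: parents plus children, duplicates removed
        pool = np.unique(np.vstack([K, children]), axis=0)
        fitness = np.array([log_joint(s, x, W, pi, sigma) for s in pool])
        K = pool[np.argsort(fitness)[-size:]]  # selection: keep top-|K|
    return K

def truncated_posterior(K, x, W, pi, sigma):
    """q(s) proportional to p(s, x | Theta), restricted to states in K."""
    logp = np.array([log_joint(s, x, W, pi, sigma) for s in K])
    w = np.exp(logp - logp.max())  # subtract max for numerical stability
    return w / w.sum()

# Toy usage on one synthetic data point
H, D, S = 8, 16, 32                     # latents, observed dims, |K|
W = rng.normal(size=(D, H))
s_true = (rng.random(H) < 0.2).astype(int)
x = W @ s_true + 0.1 * rng.normal(size=D)
K0 = (rng.random((S, H)) < 0.2).astype(int)  # initial population of genomes
K = evolve_states(K0, x, W, pi=0.2, sigma=0.1)
q = truncated_posterior(K, x, W, pi=0.2, sigma=0.1)
```

In a full learning algorithm, this evolutionary E-step would alternate with an M-step that updates the model parameters Θ = (W, π, σ) using expectations taken under the truncated posteriors; because the fitness only requires evaluating the model's joint probability, the same loop applies to other discrete-latent generative models without new analytical derivations, which is the "black box" property the abstract refers to.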
