Paper Title

Progressive Learning and Disentanglement of Hierarchical Representations

Paper Authors

Zhiyuan Li, Jaideep Vitthal Murkute, Prashnna Kumar Gyawali, Linwei Wang

Paper Abstract

Learning rich representations from data is an important task for deep generative models such as the variational auto-encoder (VAE). However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variation for top-down generation is compromised. Motivated by the concept of "starting small", we present a strategy to progressively learn independent hierarchical representations from high to low levels of abstraction. The model starts by learning the most abstract representation, and then progressively grows the network architecture to introduce new representations at different levels of abstraction. We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark data sets, using three disentanglement metrics including a new metric we propose to complement the previously presented metric of mutual information gap. We further present both qualitative and quantitative evidence on how the progression of learning improves the disentangling of hierarchical representations. By drawing on the respective advantages of hierarchical representation learning and progressive learning, this is, to our knowledge, the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations.
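To make the "progressive growing" idea concrete, below is a minimal PyTorch sketch of a ladder-style VAE with latent codes at several abstraction levels. This is not the authors' exact architecture: the module layout, dimensions, and schedule are illustrative assumptions, and progression is approximated here by fade-in gates (alphas) on a fixed network rather than by literally adding layers.

```python
# A minimal sketch, NOT the paper's exact model: a ladder-style VAE whose
# latent levels are gated by fade-in coefficients during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveHierarchicalVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=4, n_levels=3):
        super().__init__()
        self.n_levels = n_levels
        # Bottom-up encoder: one hidden layer per abstraction level.
        self.enc = nn.ModuleList(
            [nn.Linear(x_dim if i == 0 else h_dim, h_dim) for i in range(n_levels)]
        )
        # One Gaussian posterior head (mu, logvar) per level.
        self.heads = nn.ModuleList(
            [nn.Linear(h_dim, 2 * z_dim) for _ in range(n_levels)]
        )
        # Top-down decoder: inject each level's code at its own depth;
        # the top (most abstract) level sees only its own code.
        self.dec = nn.ModuleList(
            [nn.Linear(z_dim + (0 if i == n_levels - 1 else h_dim), h_dim)
             for i in range(n_levels)]
        )
        self.out = nn.Linear(h_dim, x_dim)

    def forward(self, x, alphas):
        # alphas[i] in [0, 1] gates level i (index 0 = least abstract);
        # training starts with only the most abstract level active.
        hs, h = [], x
        for layer in self.enc:
            h = F.relu(layer(h))
            hs.append(h)
        zs, kl = [], 0.0
        for i in range(self.n_levels):
            mu, logvar = self.heads[i](hs[i]).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            zs.append(alphas[i] * z)  # fade new levels in gradually
            kl = kl + alphas[i] * (
                -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
            )
        # Decode top-down, from most to least abstract.
        d = F.relu(self.dec[-1](zs[-1]))
        for i in reversed(range(self.n_levels - 1)):
            d = F.relu(self.dec[i](torch.cat([zs[i], d], dim=-1)))
        return self.out(d), kl.mean()
```

A plausible schedule under these assumptions: start training with alphas = [0.0, 0.0, 1.0] so only the most abstract level is learned, then ramp each lower level's coefficient toward 1.0 in successive phases, letting new, less abstract representations come online once the higher-level ones have stabilized.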
