Paper Title
Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations
Paper Authors
Paper Abstract
Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models. Besides enabling scalability, one of their main advantages over sparse approximations using direct marginal likelihood maximization is that they provide a robust alternative for point estimation of the inducing inputs, i.e., the locations of the inducing variables. In this work we challenge the common wisdom that optimizing the inducing inputs in the variational framework yields optimal performance. We show that, by revisiting old model approximations such as the fully independent training conditional (FITC), endowed with powerful sampling-based inference methods, treating both inducing locations and GP hyper-parameters in a Bayesian way can improve performance significantly. Based on stochastic gradient Hamiltonian Monte Carlo, we develop a fully Bayesian approach to scalable GP and deep GP models, and demonstrate its state-of-the-art performance through an extensive experimental campaign across several regression and classification problems.
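The sampler named in the abstract, stochastic gradient Hamiltonian Monte Carlo (SGHMC; Chen et al., 2014), simulates friction-damped Hamiltonian dynamics driven by noisy minibatch gradients of the negative log posterior. The sketch below is a minimal, generic SGHMC update on a toy standard-normal target with an artificially noisy gradient; it is not the authors' implementation, and the function names, step size, and friction values are illustrative assumptions only:

```python
import numpy as np

def sghmc(grad_u, theta0, n_steps, eps=0.01, alpha=0.1, rng=None):
    """Simplified SGHMC (Chen et al., 2014) with friction alpha and step eps:
        v     <- (1 - alpha) * v - eps * grad_U(theta) + N(0, 2 * alpha * eps)
        theta <- theta + v
    grad_u: (possibly noisy) gradient of the negative log posterior U(theta).
    Returns the array of sampled positions, one row per step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    v = np.zeros_like(theta)
    samples = np.empty((n_steps,) + theta.shape)
    for t in range(n_steps):
        # Injected noise compensates for the friction term (up to the
        # unknown minibatch-gradient noise, neglected in this sketch).
        noise = rng.normal(0.0, np.sqrt(2.0 * alpha * eps), size=theta.shape)
        v = (1.0 - alpha) * v - eps * grad_u(theta) + noise
        theta = theta + v
        samples[t] = theta
    return samples

# Toy target: U(theta) = theta^2 / 2, so the posterior is N(0, 1).
# Gaussian gradient noise stands in for minibatch subsampling.
rng = np.random.default_rng(42)
noisy_grad = lambda th: th + rng.normal(0.0, 0.1, size=th.shape)
samples = sghmc(noisy_grad, theta0=[3.0], n_steps=20000, rng=rng)
chain = samples[5000:, 0]  # discard burn-in
```

In the paper's setting, `theta` would stack the inducing inputs and GP hyper-parameters, and `grad_u` would come from a minibatch estimate of the (negative) log marginal likelihood of the FITC-style model rather than this analytic toy gradient.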