Paper Title
Efficient computation of the Knowledge Gradient for Bayesian Optimization
Paper Authors
Paper Abstract
Bayesian optimization is a powerful collection of methods for optimizing expensive, stochastic black-box functions. A key component of any Bayesian optimization algorithm is the acquisition function, which determines which solution should be evaluated in each iteration. The Knowledge Gradient acquisition function is a popular and very effective choice; however, there is no analytical way to compute it, and the several existing implementations make different approximations. In this paper, we review and compare the spectrum of Knowledge Gradient implementations and propose One-shot Hybrid KG, a new approach that combines several previously proposed ideas and is cheap to compute as well as powerful and efficient. We prove that the new method preserves the theoretical properties of earlier approaches and empirically show drastically reduced computational overhead with equal or improved performance. All experiments are implemented in BoTorch, and the code is available on GitHub.
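For context, the Knowledge Gradient has a standard definition in the Bayesian optimization literature; the notation below (posterior mean $\mu_n$ after $n$ evaluations) is ours, not taken from the abstract:

$$\mathrm{KG}_n(x) = \mathbb{E}\!\left[\max_{x'} \mu_{n+1}(x') \,\middle|\, x_{n+1} = x\right] - \max_{x'} \mu_n(x'),$$

i.e., the expected gain in the maximum of the posterior mean from one additional evaluation at $x$. The expectation is over the as-yet-unobserved outcome $y_{n+1}$ and has no closed form on continuous domains, which is why implementations must approximate it, for example by discretizing the domain or by Monte Carlo "fantasy" samples.

Since the abstract states the experiments are implemented in BoTorch, a minimal sketch of the library's existing one-shot KG baseline might look like the following. This uses BoTorch's stock qKnowledgeGradient, not the paper's One-shot Hybrid KG; the toy data, num_fantasies, and optimizer settings are illustrative assumptions only.

import torch
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf

# Toy training data on [0, 1]^2 (illustrative only); we maximize.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = -((train_X - 0.5) ** 2).sum(dim=-1, keepdim=True)

# Fit a standard single-output GP surrogate.
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# One-shot KG: the Monte Carlo "fantasy" outcomes and their inner
# maximizers are optimized jointly with the candidate point, avoiding
# a nested inner optimization loop.
qkg = qKnowledgeGradient(model, num_fantasies=64)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, kg_value = optimize_acqf(
    acq_function=qkg,
    bounds=bounds,
    q=1,
    num_restarts=10,
    raw_samples=512,  # initialization heuristic for the restarts
)
print(candidate, kg_value)

Note that optimize_acqf recognizes one-shot acquisition functions and returns only the extracted candidate, not the fantasy solutions that were optimized alongside it.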