Paper Title

HyperBO+: Pre-training a universal prior for Bayesian optimization with hierarchical Gaussian processes

Paper Authors

Zhou Fan, Xinran Han, Zi Wang

Abstract

Bayesian optimization (BO), while proven highly effective for many black-box function optimization tasks, requires practitioners to carefully select priors that model their functions of interest well. Rather than specifying priors by hand, researchers have investigated transfer-learning-based methods that learn the priors automatically, e.g., multi-task BO (Swersky et al., 2013), few-shot BO (Wistuba and Grabocka, 2021), and HyperBO (Wang et al., 2022). However, these prior-learning methods typically assume that the input domains are the same for all tasks, weakening their ability to use observations of functions with different domains or to generalize the learned priors to BO on different search spaces. In this work, we present HyperBO+: a pre-training approach for hierarchical Gaussian processes that enables the same prior to work universally for Bayesian optimization on functions with different domains. We propose a two-step pre-training method and analyze its appealing asymptotic properties and its benefits to BO, both theoretically and empirically. On real-world hyperparameter tuning tasks that involve multiple search spaces, we demonstrate that HyperBO+ generalizes to unseen search spaces and achieves lower regret than competitive baselines.
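
The "two-step pre-training method" named in the abstract can be pictured as: step one fits Gaussian process hyperparameters to each training task independently (type-II maximum likelihood), and step two fits a distribution over the per-task estimates to serve as the shared hyper-prior. The sketch below is a minimal illustrative toy in NumPy/SciPy under assumed simplifications (a squared-exponential kernel with scalar amplitude and lengthscale, and a diagonal Gaussian hyper-prior); it is not the authors' implementation, and all function names are invented for exposition.

```python
# Toy sketch of the two-step pre-training idea from the abstract.
# NOT the HyperBO+ implementation; kernel choice, hyper-prior form,
# and all names are assumptions made for illustration only.
import numpy as np
from scipy.optimize import minimize

def se_kernel(X1, X2, log_amp, log_ls):
    """Squared-exponential kernel with scalar amplitude and lengthscale."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(2 * log_amp) * np.exp(-0.5 * d2 / np.exp(2 * log_ls))

def neg_log_marginal_likelihood(params, X, y, noise=1e-4):
    """Negative GP log marginal likelihood for one task (constant term omitted)."""
    log_amp, log_ls = params
    K = se_kernel(X, X, log_amp, log_ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def pretrain(tasks):
    """Two-step pre-training sketch.

    Step 1: estimate GP hyperparameters per task by maximizing the
            marginal likelihood (type-II MLE).
    Step 2: fit a distribution over the per-task estimates (here, a
            diagonal Gaussian) to act as the learned hyper-prior.
    """
    estimates = []
    for X, y in tasks:
        res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2),
                       args=(X, y), method="L-BFGS-B",
                       bounds=[(-5.0, 5.0), (-5.0, 5.0)])
        estimates.append(res.x)
    estimates = np.array(estimates)
    # Mean and std of the per-task estimates parameterize the hyper-prior.
    return estimates.mean(axis=0), estimates.std(axis=0)

# Toy usage: three 1-D tasks with random data, for brevity.
rng = np.random.default_rng(0)
tasks = [(rng.uniform(size=(20, 1)), rng.normal(size=20)) for _ in range(3)]
mu, sigma = pretrain(tasks)
print("hyper-prior mean:", mu, "std:", sigma)
```

Note that in HyperBO+ the learned hyper-prior is meant to transfer across search spaces with different domains; this single-kernel, fixed-dimension toy does not attempt to capture that part of the method.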
