Paper Title
Linear Bandits with Limited Adaptivity and Learning Distributional Optimal Design
Paper Authors
Abstract
Motivated by practical needs such as large-scale learning, we study the impact of adaptivity constraints on linear contextual bandits, a central problem in online active learning. We consider two popular limited-adaptivity models from the literature: batch learning and rare policy switches. We show that when the context vectors are adversarially chosen in $d$-dimensional linear contextual bandits, the learner needs $O(d \log d \log T)$ policy switches to achieve the minimax-optimal regret, and this is optimal up to $\mathrm{poly}(\log d, \log \log T)$ factors; for stochastic context vectors, even in the more restricted batch learning model, only $O(\log \log T)$ batches are needed to achieve the optimal regret. Together with the known results in the literature, our results present a complete picture of the adaptivity constraints in linear contextual bandits. Along the way, we propose the distributional optimal design, a natural extension of optimal experiment design, and provide a statistically and computationally efficient learning algorithm for the problem, which may be of independent interest.
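To illustrate how an $O(\log \log T)$ batch count can suffice, a common construction in the batched-bandit literature is the geometric grid $t_k = T^{1-2^{-k}}$, whose endpoints reach the horizon $T$ after roughly $\log_2 \log_2 T$ batches. The sketch below is an illustration under that standard assumption, not necessarily the exact schedule used in this paper; the function names are hypothetical.

```python
import math


def batch_grid(T, num_batches):
    """Endpoints of a geometric batch schedule t_k = T^(1 - 2^-k).

    This is the standard grid used to show O(log log T) batch
    complexity in batched bandits (illustrative; not claimed to be
    this paper's exact construction).
    """
    grid = [min(T, math.ceil(T ** (1.0 - 2.0 ** (-k))))
            for k in range(1, num_batches + 1)]
    grid[-1] = T  # the final batch always ends at the horizon
    return grid


def batches_needed(T):
    """Number of geometric batches until t_k reaches T: O(log log T)."""
    return math.ceil(math.log2(math.log2(T))) + 1
```

For example, with horizon $T = 10^6$ the grid needs only 6 batches, and each endpoint grows super-polynomially toward $T$, which is what keeps the per-batch regret contributions balanced.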