Paper Title
Implicit Parameter-free Online Learning with Truncated Linear Models
Paper Authors
Paper Abstract
Parameter-free algorithms are online learning algorithms that do not require setting learning rates. They achieve optimal regret with respect to the distance between the initial point and any competitor. Yet, parameter-free algorithms do not take into account the geometry of the losses. Recently, in the stochastic optimization literature, it has been proposed to instead use truncated linear lower bounds, which produce better performance by more closely modeling the losses. In particular, truncated linear models greatly reduce the problem of overshooting the minimum of the loss function. Unfortunately, truncated linear models cannot be used with parameter-free algorithms because the updates become very expensive to compute. In this paper, we propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor. Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties. We also conduct an empirical study demonstrating the practical utility of our algorithms.
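For concreteness, here is a minimal sketch of a truncated linear model of the kind the abstract refers to; the notation below is illustrative and assumed, not quoted from the paper body.

% Truncated linear lower bound of a convex loss \ell_t built at the query point x_t.
% Assumed notation: g_t \in \partial \ell_t(x_t) is a subgradient, and \underline{\ell}_t
% is a known lower bound on \ell_t (e.g., 0 for non-negative losses).
\tilde{\ell}_t(x) \;=\; \max\Bigl( \ell_t(x_t) + \langle g_t,\, x - x_t \rangle,\; \underline{\ell}_t \Bigr)
% Unlike the plain linear model \ell_t(x_t) + \langle g_t, x - x_t \rangle, which is unbounded
% below, \tilde{\ell}_t is bounded below by \underline{\ell}_t, so an "implicit"-style update that
% (approximately) minimizes \tilde{\ell}_t plus a proximal term cannot step past the model's minimum.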