Paper Title

Accelerate the Warm-up Stage in the Lasso Computation via a Homotopic Approach

Paper Authors

Yujie Zhao, Xiaoming Huo

Paper Abstract

In optimization, it is well known that when the objective function is strictly convex and well-conditioned, gradient-based approaches can be extremely effective, e.g., achieving an exponential rate of convergence. On the other hand, existing algorithms for Lasso-type estimators in general cannot achieve the optimal rate due to the undesirable behavior of the absolute-value function at the origin. A homotopic method uses a sequence of surrogate functions to approximate the $\ell_1$ penalty that is used in Lasso-type estimators. The surrogate functions converge to the $\ell_1$ penalty of the Lasso estimator. At the same time, each surrogate function is strictly convex, which enables a provably faster numerical rate of convergence. In this paper, we demonstrate that by meticulously defining the surrogate functions, one can prove a faster numerical convergence rate than any existing method for computing Lasso-type estimators. Namely, the state-of-the-art algorithms can only guarantee an $O(1/\epsilon)$ or $O(1/\sqrt{\epsilon})$ convergence rate, while we can prove an $O([\log(1/\epsilon)]^2)$ rate for the newly proposed algorithm. Our numerical simulations show that the new algorithm also performs better empirically.
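To make the homotopic idea in the abstract concrete, the following is a minimal sketch in Python. It assumes a Huber-like surrogate $s_\mu(b) = \sqrt{b^2 + \mu^2}$ for $|b|$, which is smooth and strictly convex and converges to $|b|$ as $\mu \to 0$; plain gradient descent is run on each surrogate problem while $\mu$ is driven down on a geometric schedule. The function name `smoothed_lasso_homotopy` and all parameter choices are illustrative assumptions, not the paper's exact surrogate construction or schedule.

```python
import numpy as np

def smoothed_lasso_homotopy(X, y, lam, mu0=1.0, mu_min=1e-6,
                            decay=0.5, inner_iters=200):
    """Sketch of a homotopic approach to the Lasso (illustrative only).

    Minimizes 0.5*||y - X b||^2 + lam * sum_j s_mu(b_j), where
    s_mu(b) = sqrt(b^2 + mu^2) is a smooth, strictly convex surrogate
    of |b|; mu is decreased geometrically toward zero.
    """
    n, p = X.shape
    b = np.zeros(p)
    # Lipschitz constant of the gradient of the quadratic part.
    L_quad = np.linalg.norm(X, 2) ** 2
    mu = mu0
    while mu > mu_min:
        # The surrogate's curvature is at most lam/mu, so shrink the
        # step size accordingly as mu decreases.
        step = 1.0 / (L_quad + lam / mu)
        for _ in range(inner_iters):
            grad = X.T @ (X @ b - y) + lam * b / np.sqrt(b ** 2 + mu ** 2)
            b = b - step * grad
        mu *= decay  # tighten the surrogate toward the ell_1 penalty
    return b

# Hypothetical usage on synthetic sparse data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)
b_hat = smoothed_lasso_homotopy(X, y, lam=0.5)
```

This sketch only shows the mechanism (solve a smoothed problem, then tighten the smoothing); the paper's contribution is a carefully defined surrogate sequence for which the $O([\log(1/\epsilon)]^2)$ rate can be proven, which a generic schedule like the one above does not by itself guarantee.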
