Paper Title
A Graduated Filter Method for Large Scale Robust Estimation
Paper Authors
Paper Abstract
Due to the highly non-convex nature of large-scale robust parameter estimation, avoiding poor local minima is challenging in real-world applications where input data is contaminated by a large or unknown fraction of outliers. In this paper, we introduce a novel solver for robust estimation that possesses a strong ability to escape poor local minima. Our algorithm is built upon the class of traditional graduated optimization techniques, which are considered state-of-the-art local methods to solve problems having many poor minima. The novelty of our work lies in the introduction of an adaptive kernel (or residual) scaling scheme, which allows us to achieve faster convergence rates. Like other existing methods that aim to return good local minima for robust estimation tasks, our method relaxes the original robust problem but adapts a filter framework from non-linear constrained optimization to automatically choose the level of relaxation. Experimental results on real large-scale datasets such as bundle adjustment instances demonstrate that our proposed method achieves competitive results.
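The graduated optimization idea the abstract builds on can be illustrated with a minimal sketch. This is not the paper's solver or its adaptive kernel-scaling scheme; it is a generic graduated non-convexity loop, assumed here to use a Geman-McClure kernel minimized by iteratively reweighted least squares (IRLS) on a toy 1-D robust mean problem, with the kernel scale annealed from large (nearly quadratic, easy to optimize) to small (strongly robust):

```python
import numpy as np

def graduated_robust_mean(x, sigmas=(8.0, 4.0, 2.0, 1.0), iters=20):
    """Robust mean of 1-D data via graduated optimization.

    Minimizes a Geman-McClure kernel rho(r) = r^2 / (r^2 + sigma^2)
    by IRLS, annealing sigma from large to small so that early, smoother
    relaxations guide the iterate away from poor local minima.
    """
    mu = np.mean(x)  # start from the non-robust least-squares estimate
    for sigma in sigmas:  # graduation: progressively shrink the kernel scale
        for _ in range(iters):
            r = x - mu
            # IRLS weight rho'(r)/r, up to a constant factor that
            # cancels inside the weighted mean below
            w = sigma**2 / (r**2 + sigma**2) ** 2
            mu = np.sum(w * x) / np.sum(w)
    return mu

# Toy data: inliers near 0 contaminated by 20% gross outliers near 50.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.5, 80), rng.normal(50.0, 0.5, 20)])
print(graduated_robust_mean(data))  # stays near the inlier mode, not near np.mean(data)
```

In this sketch the annealing schedule `sigmas` is fixed by hand; the abstract's contribution is precisely to avoid such a hand-tuned schedule by choosing the relaxation level automatically via a filter framework.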