Paper Title
On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation
Paper Authors
Paper Abstract
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature for ensuring robustness against noisy data, but also for finding models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the $\ell_1$-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the $\ell_1$-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions, one of which is the $\ell_1$-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach.
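To make the scalarization the abstract refers to concrete, the following is a minimal sketch (not the paper's method) of the standard weighted $\ell_1$ penalty approach in the linear regression setting, where minimizing $\tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$ for a sweep of weights $\lambda$ traces out compromises between data fit and sparsity. The solver here is plain ISTA (proximal gradient); the problem data and parameter names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam * ||x||_1 via ISTA."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - t * A.T @ (A @ x - b), t * lam)
    return x

# Illustrative sparse regression problem (synthetic data).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]             # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)

# For lam >= ||A^T b||_inf the minimizer is x = 0; sweep below that.
lam_max = np.max(np.abs(A.T @ b))
for lam in [0.01 * lam_max, 0.1 * lam_max, lam_max]:
    x = lasso_ista(A, b, lam)
    nnz = np.count_nonzero(np.abs(x) > 1e-8)
    print(f"lambda = {lam:9.3f}  nonzeros = {nnz}")
```

Each weight $\lambda$ yields one compromise on the trade-off curve; larger weights produce sparser solutions. For convex problems like this one, such a weight sweep recovers the whole Pareto front, which is exactly what fails in the non-convex nonlinear case the paper targets with its continuation method.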