Paper Title

Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via Non-Lipschitz Matrix Concentration

Paper Authors

Paul Hand, Oscar Leong, Vladislav Voroninski

Paper Abstract

We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless, linear measurements and analyze the amplitude-based non-smooth least squares objective. We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective's gradient dynamics. While common techniques to establish uniform concentration of random functions exploit Lipschitz continuity, we prove that the discontinuous matrix-valued operator satisfies a uniform matrix concentration inequality when the measurement vectors are Gaussian, as soon as $m = \Omega(n)$, with high probability. We then show that satisfaction of this inequality is sufficient for subgradient descent with proper initialization to converge linearly to the true solution up to the global sign ambiguity. As a consequence, this guarantees local convergence for Gaussian measurements at optimal sample complexity. The concentration methods in the present work have previously been used to establish recovery guarantees for a variety of inverse problems under generative neural network priors. This paper demonstrates the applicability of these techniques to more traditional inverse problems and serves as a pedagogical introduction to those results.
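
To make the setup concrete, the sketch below runs subgradient descent on the amplitude-based objective, here taken to be $f(x) = \frac{1}{2m}\sum_{i=1}^m \left(|a_i^\top x| - b_i\right)^2$ with phaseless measurements $b_i = |a_i^\top x_\star|$ and i.i.d. Gaussian measurement vectors $a_i$. This is a minimal illustration, not the paper's reference implementation: the oversampling ratio, step size, iteration count, and the near-$x_\star$ initialization (standing in for the proper initialization the paper assumes) are all illustrative choices.

```python
import numpy as np

def amplitude_flow_subgrad(A, b, x0, step=0.5, n_iters=500):
    """Subgradient descent on f(x) = (1/2m) * sum_i (|<a_i, x>| - b_i)^2.

    A : (m, n) measurement matrix with rows a_i
    b : (m,) phaseless measurements b_i = |<a_i, x_star>|
    """
    m = A.shape[0]
    x = x0.copy()
    for _ in range(n_iters):
        r = A @ x
        # sign(t) is a valid subgradient of |t| (np.sign(0) = 0 is admissible).
        g = A.T @ (np.sign(r) * (np.abs(r) - b)) / m
        x = x - step * g
    return x

rng = np.random.default_rng(0)
n = 100
m = 8 * n                            # m = O(n); the ratio 8 is illustrative
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # i.i.d. Gaussian measurement vectors
b = np.abs(A @ x_star)               # phaseless measurements

# Stand-in for the proper initialization the paper assumes: start near x_star.
x0 = x_star + 0.1 * rng.standard_normal(n)
x_hat = amplitude_flow_subgrad(A, b, x0)

# Recovery is only possible up to the global sign ambiguity.
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
print(f"relative error up to sign: {err / np.linalg.norm(x_star):.2e}")
```

With a constant step size in this well-conditioned regime, the error typically decays geometrically, which is consistent with the linear local convergence the uniform matrix concentration inequality is used to guarantee.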
