Paper Title
Tensor Completion via Tensor QR Decomposition and $L_{2,1}$-Norm Minimization
Authors
Abstract
In this paper, we consider the tensor completion problem, which has attracted particular attention from many researchers in machine learning. Our fast and accurate method is built on extending the $L_{2,1}$-norm minimization and Qatar Riyal (QR) decomposition (LNM-QR) method for matrix completion to tensor completion, and differs from popular tensor completion methods that use the tensor singular value decomposition (t-SVD). To shorten the computing time, the t-SVD is replaced with CTSVD-QR, a method that computes an approximate t-SVD based on QR decomposition and can iteratively compute the largest $r$ $\left(r>0\right)$ singular values (tubes) and their associated singular vectors (of tubes). In addition, we minimize our model with the tensor $L_{2,1}$-norm instead of the tensor nuclear norm because it is easier to optimize. To improve accuracy, the alternating direction method of multipliers (ADMM), a gradient-search-based method, plays a crucial role in our method. Numerical experiments show that our method is faster than state-of-the-art algorithms and achieves excellent accuracy.
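To make the abstract's two main ingredients concrete, below is a minimal NumPy sketch, not the authors' implementation: (1) an approximate rank-$r$ t-SVD obtained by QR-orthonormalized subspace iteration on each Fourier-domain frontal slice, in the spirit of the CTSVD-QR idea, and (2) an $L_{2,1}$-style norm. The function names `approx_tsvd_qr` and `l21_norm`, the iteration count, the random initialization, and the column-wise norm definition are all illustrative assumptions rather than the paper's exact algorithm.

```python
# Minimal sketch, assuming the standard t-SVD setting (FFT along the third mode).
import numpy as np

def approx_tsvd_qr(X, r, iters=5):
    """Approximate the top-r singular tubes/vectors of a 3-way tensor X.

    Works slice-by-slice in the Fourier domain along the third mode, using
    QR-orthonormalized subspace iteration instead of a full SVD.
    Returns U (n1 x r x n3), S (r x r x n3), V (n2 x r x n3) such that,
    slice-wise in the Fourier domain, X_hat[:,:,k] ~ U_hat S_hat V_hat^H.
    """
    n1, n2, n3 = X.shape
    Xh = np.fft.fft(X, axis=2)                       # frontal slices in the Fourier domain
    Uh = np.zeros((n1, r, n3), dtype=complex)
    Sh = np.zeros((r, r, n3), dtype=complex)
    Vh = np.zeros((n2, r, n3), dtype=complex)
    for k in range(n3):
        A = Xh[:, :, k]
        V = np.linalg.qr(np.random.randn(n2, r))[0]  # random orthonormal start (assumption)
        for _ in range(iters):                       # QR-based subspace iteration
            U, _ = np.linalg.qr(A @ V)
            V, T = np.linalg.qr(A.conj().T @ U)
        # A ~ U @ T^H @ V^H, so the middle (diagonal-like) factor is T^H
        Uh[:, :, k], Sh[:, :, k], Vh[:, :, k] = U, T.conj().T, V
    # transform the three factors back to the original domain
    return (np.fft.ifft(Uh, axis=2),
            np.fft.ifft(Sh, axis=2),
            np.fft.ifft(Vh, axis=2))

def l21_norm(X):
    """Illustrative L_{2,1} norm: sum of the l2-norms of the columns of every
    frontal slice (the paper's exact tensor definition may differ)."""
    return sum(np.linalg.norm(X[:, j, k])
               for k in range(X.shape[2]) for j in range(X.shape[1]))
```

The QR step here only replaces the per-slice SVD; in a full completion solver such a factorization would typically sit inside an ADMM loop that alternates between this low-rank update and a projection onto the observed entries.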