Paper Title

Tucker tensor factor models: matricization and mode-wise PCA estimation

Paper Authors

Xu Zhang, Guodong Li, Catherine C. Liu, Jianhua Guo

Paper Abstract

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying collections of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage of development. In this paper, we attempt to "decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Taking the Tucker tensor factor model (TuTFaM), induced from a noisy version of the widely used Tucker decomposition, as a demonstration, we show that the estimation of the signal components is essentially a mode-wise PCA technique, and that the involvement of projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimators work in tensor reconstruction, and in clustering for independent video and dependent economic datasets, respectively.
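
To make the mode-wise PCA idea in the abstract concrete, below is a minimal numerical sketch of estimating an order-3 Tucker tensor factor model by matricizing along each mode and applying PCA. This is an illustrative sketch under simplifying assumptions, not the authors' estimators or code: the function names (`unfold`, `modewise_pca`, `estimate_cores`), the simulated data, and the plain (non-projected, non-iterated) covariance construction are hypothetical choices made for exposition.

```python
# Minimal sketch (not the paper's implementation) of mode-wise PCA estimation
# for a Tucker tensor factor model Y[t] = G[t] x_1 A1 x_2 A2 x_3 A3 + noise.
import numpy as np

def unfold(tensor, mode):
    """Mode-k matricization: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def modewise_pca(Y, ranks):
    """Estimate each mode's loading matrix by PCA on that mode's unfolded sample covariance.

    Y     : array of shape (T, d1, d2, d3), a sample of order-3 tensors
    ranks : tuple (r1, r2, r3) of factor numbers per mode
    """
    loadings = []
    for mode, r in enumerate(ranks):
        # Accumulate the mode-wise sample covariance over the sample.
        cov = sum(unfold(Y[t], mode) @ unfold(Y[t], mode).T for t in range(Y.shape[0]))
        # Top-r eigenvectors serve as the estimated loading matrix for this mode.
        eigvals, eigvecs = np.linalg.eigh(cov)
        loadings.append(eigvecs[:, np.argsort(eigvals)[::-1][:r]])
    return loadings

def estimate_cores(Y, loadings):
    """Project each observation onto the estimated loading spaces to recover core tensors."""
    A1, A2, A3 = loadings
    return np.einsum('tijk,ia,jb,kc->tabc', Y, A1, A2, A3)

# Toy usage: simulate from the model, estimate loadings, and reconstruct the signal.
rng = np.random.default_rng(0)
T, dims, ranks = 50, (10, 12, 8), (2, 3, 2)
A = [np.linalg.qr(rng.standard_normal((d, r)))[0] for d, r in zip(dims, ranks)]
G = rng.standard_normal((T, *ranks))
signal = np.einsum('tabc,ia,jb,kc->tijk', G, *A)
Y = signal + 0.1 * rng.standard_normal((T, *dims))

A_hat = modewise_pca(Y, ranks)
G_hat = estimate_cores(Y, A_hat)
Y_hat = np.einsum('tabc,ia,jb,kc->tijk', G_hat, *A_hat)
print("relative reconstruction error:", np.linalg.norm(Y_hat - signal) / np.linalg.norm(signal))
```

The projected and iterated refinements discussed in the abstract would replace the plain covariance step above; the sketch only illustrates the baseline matricize-then-PCA estimation.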
