Paper Title

Locally Linear Embedding and its Variants: Tutorial and Survey

Authors

Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley

Abstract

This is a tutorial and survey paper for Locally Linear Embedding (LLE) and its variants. The idea of LLE is to fit the local structure of the manifold in the embedding space. In this paper, we first cover LLE, kernel LLE, inverse LLE, and feature fusion with LLE. Then, we cover out-of-sample embedding using linear reconstruction, eigenfunctions, and kernel mapping. Incremental LLE is explained for embedding streaming data. Landmark LLE methods using the Nyström approximation and locally linear landmarks are explained for big data embedding. We introduce methods for selecting the number of neighbors using residual variance, Procrustes statistics, preservation neighborhood error, and local neighborhood selection. Afterwards, Supervised LLE (SLLE), enhanced SLLE, SLLE projection, probabilistic SLLE, supervised guided LLE (using the Hilbert-Schmidt independence criterion), and semi-supervised LLE are explained for supervised and semi-supervised embedding. Robust LLE methods using the least squares problem and penalty functions are also introduced for embedding in the presence of outliers and noise. Then, we introduce the fusion of LLE with other manifold learning methods, including Isomap (i.e., ISOLLE), principal component analysis, Fisher discriminant analysis, discriminant LLE, and Isotop. Finally, we explain weighted LLE, in which the distances, reconstruction weights, or the embeddings are adjusted for better embedding; we cover weighted LLE for deformed distributed data, weighted LLE using probability of occurrence, SLLE by adjusting weights, modified LLE, and iterative LLE.
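To make the core idea concrete, the sketch below runs standard LLE on a synthetic Swiss-roll manifold. It uses scikit-learn's `LocallyLinearEmbedding` rather than the paper's own code, and the parameter choices (12 neighbors, 2 embedding dimensions) are illustrative assumptions, not values from the survey.

```python
# A minimal sketch of standard LLE, assuming scikit-learn is available.
# Each point is reconstructed from its k nearest neighbors, and the
# low-dimensional embedding is chosen to preserve those reconstruction weights.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic 3D Swiss-roll data (a 2D manifold embedded in R^3).
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# n_neighbors and n_components are illustrative choices, not from the paper.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                             method="standard", random_state=0)
Y = lle.fit_transform(X)

print(Y.shape)  # (1000, 2)
```

Variants surveyed in the paper (e.g., `method="modified"` for modified LLE) are exposed through the same estimator; out-of-sample embedding corresponds to calling `lle.transform` on new points.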
