Title
HODLR$d$D: A new black-box fast algorithm for $N$-body problems in $d$ dimensions with guaranteed error bounds
Authors
Abstract
In this article, we prove new theorems bounding the ranks of different sub-matrices arising from these kernel functions. Such bounds are often useful for analyzing the complexity of various hierarchical matrix algorithms. We also plot the numerical rank growth of different sub-matrices arising out of various kernel functions in $1$D, $2$D, $3$D and $4$D, which, not surprisingly, agrees with the proposed theorems. Another significant contribution of this article is that, using the obtained rank bounds, we propose a way to extend the notion of \textbf{\emph{weak-admissibility}} to hierarchical matrices in higher dimensions. Based on this proposed \textbf{\emph{weak-admissibility}} condition, we develop a black-box (kernel-independent) fast algorithm for $N$-body problems, the hierarchically off-diagonal low-rank matrix in $d$ dimensions (HODLR$d$D), which can perform matrix-vector products with $\mathcal{O}(pN \log (N))$ complexity in any dimension $d$, where $p$ does not grow with any power of $N$. More precisely, our theorems guarantee that $p \in \mathcal{O} (\log (N) \log^d (\log (N)))$, which implies that our HODLR$d$D algorithm scales almost linearly. A \texttt{C++} implementation of HODLR$d$D with \texttt{OpenMP} parallelization is available at \url{https://github.com/SAFRAN-LAB/HODLRdD}. We also discuss the scalability of the HODLR$d$D algorithm and demonstrate its applicability by solving an integral equation in $4$ dimensions and by accelerating the training phase of support vector machines (SVMs) on data sets with four and five features.