Paper title
Theoretical guarantees for the advantage of GNNs over NNs in generalizing bandlimited functions on Euclidean cubes
Paper authors
Paper abstract
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications. While the expressive power of GNNs has traditionally been examined in the context of graph-level tasks, their potential for node-level tasks, such as node classification, where the goal is to interpolate missing node labels from the observed ones, remains relatively unexplored. In this study, we investigate the proficiency of GNNs for such classifications, which can also be cast as a function interpolation problem. Explicitly, we focus on ascertaining the optimal configuration of weights and layers required for a GNN to successfully interpolate a bandlimited function over Euclidean cubes. Our findings highlight a pronounced efficiency in utilizing GNNs to generalize a bandlimited function within an $\varepsilon$-error margin. Remarkably, achieving this task necessitates only $O_d((\log\varepsilon^{-1})^d)$ weights and $O_d((\log\varepsilon^{-1})^d)$ training samples. We explore how this requirement stacks up against the explicit constructions of currently available Neural Networks (NNs) designed for similar tasks. Significantly, our result is obtained by drawing an innovative connection between the GNN structures and classical sampling theorems. In essence, our pioneering work marks a meaningful contribution to the research domain, advancing our understanding of practical GNN applications.
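For background on the classical sampling theorems the abstract alludes to, a standard multidimensional form of the Whittaker-Shannon theorem is sketched below; the normalization of the band to $[-\pi,\pi]^d$ (in angular frequency) is an illustrative convention, not a quotation from the paper. If $f \in L^2(\mathbb{R}^d)$ has Fourier transform supported in $[-\pi,\pi]^d$, then it is recovered exactly from its samples on the integer lattice:

$$ f(x) \;=\; \sum_{k \in \mathbb{Z}^d} f(k) \prod_{j=1}^{d} \operatorname{sinc}(x_j - k_j), \qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}. $$

The abstract states that the paper's GNN construction draws on such interpolation formulas; the formula above is included only as general background, and the precise way it enters the weight and sample bounds is developed in the paper itself.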