Paper Title
Black-box Node Injection Attack for Graph Neural Networks
Authors
Abstract
Graph Neural Networks (GNNs) have drawn significant attention over the years and have been broadly applied to vital fields with high security standards, such as product recommendation and traffic forecasting. Under such scenarios, exploiting GNNs' vulnerabilities to downgrade their classification performance becomes a strong incentive for adversaries. Previous attackers mainly focus on structural perturbations of existing graphs. Although they deliver promising results, the actual implementation requires the capability to manipulate the graph's connectivity, which is impractical in some circumstances. In this work, we study the possibility of injecting nodes to evade the victim GNN model, and unlike previous related works with a white-box setting, we significantly restrict the amount of accessible knowledge and explore the black-box setting. Specifically, we model the node injection attack as a Markov decision process and propose GA2C, a graph reinforcement learning framework in the fashion of advantage actor critic, to generate realistic features for injected nodes and seamlessly merge them into the original graph following the same topological characteristics. Through extensive experiments on multiple acknowledged benchmark datasets, we demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods. The data and source code are publicly accessible at: https://github.com/jumxglhf/GA2C.
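The black-box MDP formulation can be illustrated with a minimal sketch. This is not the authors' GA2C implementation: the toy victim model, the feature dimensions, and the random candidate search standing in for the learned actor-critic policy are all assumptions made purely for illustration. The state is the current graph, an action injects one node with chosen features and an edge, and the reward is the drop in the victim's confidence on the target node, observed only through queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "victim" GNN (hypothetical): one mean-aggregation step plus a fixed
# linear classifier. The attacker treats it as a black box: query access only.
W = rng.normal(size=(4, 2))

def victim_predict(adj, feats, target):
    """Return the victim's class probabilities for the target node."""
    deg = adj.sum(1, keepdims=True) + 1.0
    h = (adj @ feats + feats) / deg          # aggregate neighbors + self-loop
    logits = h[target] @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def inject_node(adj, feats, new_feat, link_to):
    """MDP action: add one node with features new_feat, wired to node link_to."""
    n = adj.shape[0]
    adj2 = np.zeros((n + 1, n + 1))
    adj2[:n, :n] = adj
    adj2[n, link_to] = adj2[link_to, n] = 1.0
    return adj2, np.vstack([feats, new_feat])

# Initial toy state: a 3-node path graph with 4-dimensional features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = rng.normal(size=(3, 4))
target = 0
before = victim_predict(adj, feats, target)
label = before.argmax()

# One attack step. Reward = drop in confidence on the target's predicted
# class. A naive random search stands in for the learned A2C policy here.
best_reward, best_state = -np.inf, None
for _ in range(50):
    cand_feat = rng.normal(size=4)
    adj2, feats2 = inject_node(adj, feats, cand_feat, link_to=target)
    reward = before[label] - victim_predict(adj2, feats2, target)[label]
    if reward > best_reward:
        best_reward, best_state = reward, (adj2, feats2)
```

In the actual framework, the random candidate loop would be replaced by an actor network proposing injected-node features and link targets, with a critic estimating state value to compute the advantage for the policy-gradient update; the sketch only makes the state/action/reward interface concrete.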