Title


FairNorm: Fair and Fast Graph Neural Network Training

Authors

O. Deniz Kose, Yanning Shen

Abstract


Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art for a number of graph-based learning tasks, which leads to a rise in their employment in various domains. However, it has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups. Meanwhile, training of GNNs introduces additional challenges, such as slow convergence and possible instability. Faced with these limitations, this work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning while also providing provably faster convergence. Specifically, FairNorm employs fairness-aware normalization operators over different sensitive groups with learnable parameters to reduce the bias in GNNs. The design of FairNorm is built upon analyses that illuminate the sources of bias in graph-based learning. Experiments on node classification over real-world networks demonstrate the efficiency of the proposed scheme in improving fairness in terms of statistical parity and equal opportunity compared to fairness-aware baselines. In addition, it is empirically shown that the proposed framework leads to faster convergence compared to the naive baseline where no normalization is employed.
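The abstract describes FairNorm's core mechanism as fairness-aware normalization applied separately over different sensitive groups, with learnable parameters. The paper's exact operator is not given here, so the following is only a minimal illustrative sketch of the general idea of group-wise embedding normalization; the function name, the use of per-feature mean/std statistics, and the scalar `gamma`/`beta` parameters are all assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def group_normalize(H, s, gamma=1.0, beta=0.0, eps=1e-5):
    """Illustrative group-wise normalization of node embeddings.

    H     : (num_nodes, dim) array of node embeddings
    s     : (num_nodes,) array of sensitive-group labels
    gamma : scale parameter (learnable in a real model; scalar here)
    beta  : shift parameter (learnable in a real model; scalar here)

    Each sensitive group is standardized with its own mean and std,
    so group-level statistics of the embeddings are aligned.
    """
    H_out = np.empty_like(H, dtype=float)
    for g in np.unique(s):
        mask = (s == g)
        mu = H[mask].mean(axis=0)       # per-group, per-feature mean
        sigma = H[mask].std(axis=0)     # per-group, per-feature std
        H_out[mask] = gamma * (H[mask] - mu) / (sigma + eps) + beta
    return H_out
```

After this operation every group's embeddings share the same mean (`beta`), which intuitively removes one source of group-distinguishing signal that a downstream classifier could exploit; in FairNorm itself the normalization parameters are learned during GNN training rather than fixed as above.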
