Paper Title

Graph Neural Networks for Channel Decoding

Paper Authors

Sebastian Cammerer, Jakob Hoydis, Fayçal Aït Aoudia, Alexander Keller

Paper Abstract

In this work, we propose a fully differentiable graph neural network (GNN)-based architecture for channel decoding and showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes. The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph that represents the forward error correction (FEC) code structure by replacing node and edge message updates with trainable functions. Contrary to many other deep learning-based decoding approaches, the proposed solution enjoys scalability to arbitrary block lengths, and the training is not limited by the curse of dimensionality. We benchmark our proposed decoder against the state of the art in conventional channel decoding as well as against recent deep learning-based results. For the (63,45) BCH code, our solution outperforms weighted belief propagation (BP) decoding by approximately 0.4 dB with significantly fewer decoding iterations, and even for 5G NR LDPC codes, we observe competitive performance compared to conventional BP decoding. For the BCH code, the resulting GNN decoder can be fully parametrized with only 9640 weights.
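
To make the idea of learned message passing more concrete, below is a minimal PyTorch sketch (not the authors' implementation): belief-propagation-style node and edge updates over the Tanner graph of a parity-check matrix H are replaced by small trainable MLPs, as described in the abstract. The layer sizes, aggregation scheme, and toy parity-check matrix are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class GNNDecoder(nn.Module):
    """Sketch of learned message passing over the Tanner graph defined by H."""
    def __init__(self, H: torch.Tensor, hidden: int = 16, iters: int = 8):
        super().__init__()
        self.register_buffer("H", H)          # (n_checks, n_vars) binary parity-check matrix
        self.iters, self.hidden = iters, hidden
        # Trainable replacements for the BP node/edge message updates (sizes are illustrative).
        self.msg_c2v = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.msg_v2c = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.upd_var = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.upd_chk = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.readout = nn.Linear(hidden, 1)   # maps the variable-node state to a soft bit estimate

    def forward(self, llr: torch.Tensor) -> torch.Tensor:
        # llr: (batch, n_vars) channel log-likelihood ratios
        batch, n_vars = llr.shape
        n_chks = self.H.shape[0]
        chk_idx, var_idx = torch.nonzero(self.H, as_tuple=True)   # edge list of the Tanner graph
        h_var = torch.zeros(batch, n_vars, self.hidden, device=llr.device)
        h_chk = torch.zeros(batch, n_chks, self.hidden, device=llr.device)
        for _ in range(self.iters):
            # Check-to-variable messages, summed at each variable node, then a trainable update
            # that also sees the channel LLR of that variable node.
            m_c2v = self.msg_c2v(torch.cat([h_chk[:, chk_idx], h_var[:, var_idx]], dim=-1))
            agg_v = torch.zeros_like(h_var).index_add_(1, var_idx, m_c2v)
            h_var = self.upd_var(torch.cat([agg_v, llr.unsqueeze(-1)], dim=-1))
            # Variable-to-check messages, summed at each check node, then a trainable update.
            m_v2c = self.msg_v2c(torch.cat([h_var[:, var_idx], h_chk[:, chk_idx]], dim=-1))
            agg_c = torch.zeros_like(h_chk).index_add_(1, chk_idx, m_v2c)
            h_chk = self.upd_chk(agg_c)
        return self.readout(h_var).squeeze(-1)    # one soft output per code bit

# Toy usage with a small hand-written parity-check matrix (not a real BCH or 5G NR LDPC code).
H = torch.tensor([[1., 1., 0., 1., 0., 0.],
                  [0., 1., 1., 0., 1., 0.],
                  [1., 0., 1., 0., 0., 1.]])
soft_bits = GNNDecoder(H)(torch.randn(4, 6))      # batch of 4 noisy LLR vectors
```

Because the trainable functions act only on per-node and per-edge states, the same weights apply to any parity-check matrix, which is what gives this style of decoder its scalability to arbitrary block lengths.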
