Title
Convolutional Dictionary Learning by End-To-End Training of Iterative Neural Networks
Authors
Abstract
Sparsity-based methods have a long history in the field of signal processing and have been successfully applied to various image reconstruction problems. The involved sparsifying transforms or dictionaries are typically either pre-trained using a model which reflects the assumed properties of the signals, or adaptively learned during the reconstruction, yielding so-called blind compressed sensing approaches. In either case, however, the transforms are never explicitly trained in conjunction with the physical model that generates the signals. In addition, properly choosing the involved regularization parameters remains a challenging task. Another recently emerged training paradigm for regularization methods is to use iterative neural networks (INNs), also known as unrolled networks, which contain the physical model. In this work, we construct an INN that can be used as a supervised and physics-informed online convolutional dictionary learning algorithm. We evaluate the proposed approach by applying it to a realistic large-scale dynamic MR reconstruction problem and compare it to several other recently published works. We show that the proposed INN improves over two conventional model-agnostic training methods and yields results that are competitive with those of a deep INN. Further, it does not require choosing the regularization parameters and, in contrast to deep INNs, each network component is entirely interpretable.
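To illustrate the unrolling idea behind INNs, the following is a minimal, hypothetical sketch (not the paper's actual architecture or its convolutional dictionary): it unrolls plain ISTA for sparse coding with respect to a fixed dictionary `D` for a fixed number of iterations, where the threshold `theta` stands in for a regularization parameter that could be learned end-to-end rather than hand-tuned.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, D, theta, n_iters=10):
    """Toy unrolled ISTA: sparse-code the measurement y w.r.t. the
    dictionary D with a fixed number of iterations (the "depth" of the
    unrolled network). theta is the l1 threshold; in an INN it would be
    a trainable parameter instead of a hand-chosen one."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ x - y)           # gradient of 0.5 * ||D x - y||^2
        x = soft_threshold(x - grad / L, theta / L)
    return x
```

Each unrolled iteration is an interpretable network layer (a gradient step on the data-fidelity term followed by a proximal/thresholding step), which is the property the abstract contrasts with black-box deep INNs.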