Paper Title
Attention Cube Network for Image Restoration
Paper Authors
Paper Abstract
Recently, deep convolutional neural networks (CNNs) have been widely used in image restoration and have achieved great success. However, most existing methods are limited by local receptive fields and by treating different types of information equally. Besides, existing methods typically aggregate different feature maps with a multi-supervised scheme, which cannot effectively aggregate hierarchical feature information. To address these issues, we propose an attention cube network (A-CubeNet) for image restoration, aiming at more powerful feature expression and feature correlation learning. Specifically, we design a novel attention mechanism along three dimensions: the spatial dimension, the channel dimension, and the hierarchical dimension. The adaptive spatial attention branch (ASAB) and the adaptive channel attention branch (ACAB) constitute the adaptive dual attention module (ADAM), which captures long-range spatial and channel-wise contextual information to enlarge the receptive field and distinguish different types of information, yielding more effective feature representations. Furthermore, the adaptive hierarchical attention module (AHAM) captures long-range hierarchical contextual information to flexibly aggregate different feature maps with weights that depend on the global context. ADAM and AHAM cooperate to form an "attention in attention" structure, meaning that AHAM's inputs are enhanced by ASAB and ACAB. Experiments demonstrate the superiority of our method over state-of-the-art image restoration methods in both quantitative comparison and visual analysis. Code is available at https://github.com/YCHang686/A-CubeNet.
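The core idea of the hierarchical aggregation described above (per-feature-map weights derived from global context, instead of fixed multi-supervised fusion) can be sketched in a small NumPy toy. This is an illustrative simplification under assumptions, not the authors' implementation: the function names are hypothetical, the channel gate uses plain sigmoid gating on pooled descriptors, and AHAM is reduced to a softmax over one global statistic per feature map.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the aggregation weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    """Channel-gating sketch (ACAB-like, hypothetical): global average
    pooling yields one descriptor per channel, which is squashed into a
    per-channel weight in [0, 1]."""
    # feat: (C, H, W)
    desc = feat.mean(axis=(1, 2))        # (C,) global context per channel
    gate = 1.0 / (1.0 + np.exp(-desc))   # sigmoid gate
    return feat * gate[:, None, None]

def hierarchical_attention(feature_maps):
    """AHAM-like sketch (hypothetical): one scalar weight per intermediate
    feature map, computed from its global descriptor and normalized with a
    softmax, then a weighted sum over the hierarchy."""
    # feature_maps: list of (C, H, W) arrays from successive blocks
    stack = np.stack(feature_maps)            # (L, C, H, W)
    desc = stack.mean(axis=(1, 2, 3))         # (L,) one global statistic per map
    weights = softmax(desc)                   # (L,) weights summing to 1
    return np.tensordot(weights, stack, axes=1)  # (C, H, W) weighted sum
```

In this toy, the "attention in attention" structure corresponds to feeding channel-gated feature maps into `hierarchical_attention`, so the hierarchy-level weights are computed on features that have already been enhanced.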