Paper Title
GTrans: Grouping and Fusing Transformer Layers for Neural Machine Translation
Paper Authors
Paper Abstract
The Transformer architecture, built by stacking a sequence of encoder and decoder layers, has driven significant progress in neural machine translation. However, the vanilla Transformer mainly exploits the top-layer representation, assuming that the lower layers provide trivial or redundant information, and thus ignores bottom-layer features that are potentially valuable. In this work, we propose the Group-Transformer model (GTrans), which flexibly divides the multi-layer representations of both the encoder and the decoder into different groups and then fuses these group features to generate target words. To corroborate the effectiveness of the proposed method, extensive experiments and analyses are conducted on three bilingual translation benchmarks and two multilingual translation tasks, covering the IWSLT-14, IWSLT-17, LDC, WMT-14, and OPUS-100 benchmarks. The results demonstrate that our model consistently outperforms its Transformer counterparts. Furthermore, it can be successfully scaled up to 60 encoder layers and 36 decoder layers.
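To make the group-and-fuse idea concrete, the snippet below is a minimal, hypothetical PyTorch sketch, not the paper's implementation: it assumes contiguous groups of equal size, mean-pooling within each group, and a learned softmax-weighted sum across groups (the `GroupFusion` module and its parameters are illustrative assumptions; GTrans's actual fusion mechanism may differ).

```python
import torch
import torch.nn as nn


class GroupFusion(nn.Module):
    """Hypothetical sketch: split the stack of layer outputs into
    contiguous groups, mean-pool within each group, then combine the
    group features with a learned softmax-weighted sum."""

    def __init__(self, num_layers: int, num_groups: int, d_model: int):
        super().__init__()
        assert num_layers % num_groups == 0, "groups must evenly divide layers"
        self.num_groups = num_groups
        self.group_size = num_layers // num_groups
        # One learnable scalar weight per group (an assumption for this sketch).
        self.group_weights = nn.Parameter(torch.zeros(num_groups))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, layer_outputs: list) -> torch.Tensor:
        # layer_outputs: num_layers tensors of shape (batch, seq_len, d_model)
        stacked = torch.stack(layer_outputs)                  # (L, B, T, D)
        groups = stacked.view(self.num_groups, self.group_size,
                              *stacked.shape[1:])             # (G, L/G, B, T, D)
        group_feats = groups.mean(dim=1)                      # pool within each group
        weights = torch.softmax(self.group_weights, dim=0)    # fuse across groups
        fused = (weights.view(-1, 1, 1, 1) * group_feats).sum(dim=0)
        return self.norm(fused)                               # (B, T, D)


# Usage: fuse the outputs of a 6-layer encoder into 2 groups.
layer_outputs = [torch.randn(8, 16, 512) for _ in range(6)]
fused = GroupFusion(num_layers=6, num_groups=2, d_model=512)(layer_outputs)
print(fused.shape)  # torch.Size([8, 16, 512])
```

Because every layer contributes through its group feature, the fused representation retains bottom-layer information that a top-layer-only model would discard, which is the motivation stated in the abstract.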