Paper Title

Faster Transformer Decoding: N-gram Masked Self-Attention

Authors

Ciprian Chelba, Mia Chen, Ankur Bapna, Noam Shazeer

Abstract

Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence $S=s_1, \ldots, s_S$, we propose truncating the target-side window used for computing self-attention by making an $N$-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the $N$-gram masked self-attention model loses very little in BLEU score for $N$ values in the range $4, \ldots, 8$, depending on the task.
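To make the $N$-gram assumption concrete, below is a minimal sketch in plain NumPy of how such a mask can be built and applied in a single self-attention step. The function names (`ngram_causal_mask`, `masked_self_attention`) and the identity projections are illustrative assumptions, not the authors' implementation; the essential point is that each target position attends only to itself and the $N-1$ preceding target tokens.

```python
import numpy as np

def ngram_causal_mask(T: int, N: int) -> np.ndarray:
    """Boolean (T, T) mask: position i may attend only to positions
    i-N+1 ... i, i.e. itself plus the N-1 previous target tokens."""
    i = np.arange(T)[:, None]  # query (target) positions
    j = np.arange(T)[None, :]  # key positions
    return (j <= i) & (j > i - N)

def masked_self_attention(x: np.ndarray, N: int) -> np.ndarray:
    """Toy single-head self-attention over x of shape (T, d) with the
    N-gram mask applied to the attention logits (identity projections
    keep the sketch self-contained)."""
    T, d = x.shape
    q, k, v = x, x, x
    logits = (q @ k.T) / np.sqrt(d)
    logits = np.where(ngram_causal_mask(T, N), logits, -np.inf)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# With N = 4, target position 10 attends only to positions 7, 8, 9, 10.
print(np.flatnonzero(ngram_causal_mask(12, 4)[10]))               # [ 7  8  9 10]
print(masked_self_attention(np.random.randn(12, 16), N=4).shape)  # (12, 16)
```

Since each row of the mask keeps at most $N$ positions, the decoder only needs the last $N-1$ target-side states at each step, so the per-step cost of target-side self-attention no longer grows with the length of the generated prefix; this is the source of the decoding speedup referred to in the title.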
