Paper Title

Transfer learning and subword sampling for asymmetric-resource one-to-many neural translation

Paper Authors

Stig-Arne Grönroos, Sami Virpioja, Mikko Kurimo

Paper Abstract

There are several approaches for improving neural machine translation for low-resource languages: Monolingual data can be exploited via pretraining or data augmentation; Parallel corpora on related language pairs can be used via parameter sharing or transfer learning in multilingual models; Subword segmentation and regularization techniques can be applied to ensure high coverage of the vocabulary. We review these approaches in the context of an asymmetric-resource one-to-many translation task, in which the pair of target languages are related, with one being a very low-resource and the other a higher-resource language. We test various methods on three artificially restricted translation tasks -- English to Estonian (low-resource) and Finnish (high-resource), English to Slovak and Czech, English to Danish and Swedish -- and one real-world task, Norwegian to North Sámi and Finnish. The experiments show positive effects especially for scheduled multi-task learning, denoising autoencoder, and subword sampling.
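The abstract highlights subword sampling (subword regularization) as one of the effective techniques. Below is a minimal sketch of that idea using the SentencePiece library's unigram model: the toy corpus, vocabulary size, and sampling parameters (alpha, nbest_size) are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch only: subword sampling with SentencePiece's unigram model.
# The corpus and hyperparameters below are placeholders for illustration.
import io
import sentencepiece as spm

# Tiny toy corpus standing in for the target-language training data.
corpus = [
    "low-resource languages benefit from subword segmentation",
    "transfer learning shares parameters across related languages",
    "denoising autoencoders exploit monolingual data",
]

# Train a small unigram segmentation model in memory.
model = io.BytesIO()
spm.SentencePieceTrainer.train(
    sentence_iterator=iter(corpus),
    model_writer=model,
    vocab_size=60,            # illustrative; real systems use thousands
    model_type="unigram",
    hard_vocab_limit=False,
)
sp = spm.SentencePieceProcessor(model_proto=model.getvalue())

sentence = "subword segmentation for low-resource translation"

# Deterministic (best) segmentation.
print(sp.encode(sentence, out_type=str))

# Sampled segmentations: each training epoch can see a different
# tokenization of the same sentence, which acts as regularization.
for _ in range(3):
    print(sp.encode(sentence, out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))
```

With `enable_sampling=True`, the unigram model draws a segmentation from its lattice instead of always returning the single best one, exposing the translation model to more subword variants of the low-resource data.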
