Multilingual neural machine translation has shown great success with Transformer models. Deploying these models is challenging, however, because they typically require large vocabulary sizes covering many languages, which limits the speed of predicting output tokens in the last vocabulary projection layer. To alleviate these challenges, this paper proposes a fast vocabulary projection method via clustering, which can be used for multilingual Transformers on GPUs. First, we offline split the vocabulary search space into disjoint clusters given the hidden context vector of the decoder output, which results in a much smaller vocabulary column for the vocabulary projection. Second, at inference time, the proposed method predicts the cluster and the candidate tokens for vocabulary projection given the hidden context vector. The paper also includes an analysis of different ways of building these clusters in a multilingual setting. Our results show end-to-end speed gains of up to 25% in float16 GPU inference while maintaining the BLEU score, with only a slight increase in memory cost. The proposed method speeds up the vocabulary projection step itself by up to 2.6x. We also conduct an extensive human evaluation to verify that the proposed method preserves the translation quality of the original model.
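The two steps can be illustrated with a minimal sketch. The names `W`, `b` and `h` (output-embedding matrix, output bias and decoder hidden state) are hypothetical, and picking the cluster by a simple dot product with the centroids is a simplification; the paper's actual cluster predictor and GPU kernels are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_clusters(W, num_clusters=64, seed=0):
    """Offline: partition the vocabulary rows (output embeddings) into disjoint clusters."""
    km = KMeans(n_clusters=num_clusters, random_state=seed, n_init=10).fit(W)
    token_ids_per_cluster = [np.where(km.labels_ == c)[0] for c in range(num_clusters)]
    return km.cluster_centers_, token_ids_per_cluster

def project_with_clusters(h, W, b, centers, token_ids_per_cluster):
    """Inference: pick the most promising cluster, then project only onto its tokens."""
    c = int(np.argmax(centers @ h))          # cheap cluster prediction from the hidden state
    cand = token_ids_per_cluster[c]          # candidate token ids for this step
    logits = W[cand] @ h + b[cand]           # much smaller matrix-vector product
    return cand[int(np.argmax(logits))]      # predicted output token id
```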
Multilingual NMT has become an attractive solution for MT deployment in production. But to match bilingual quality, it comes at the cost of larger and slower models. In this work, we consider several ways to make multilingual NMT faster at inference without degrading its quality. We experiment with several "light decoder" architectures in two 20-language multi-parallel settings: small-scale on TED Talks and large-scale on ParaCrawl. Our experiments show that combining a shallow decoder with vocabulary filtering leads to inference that is more than twice as fast with no loss in translation quality. We validate our findings with BLEU and chrF (on 380 language pairs), robustness evaluation and human evaluation.
$k$NN-based neural machine translation ($k$NN-MT) has achieved state-of-the-art results on MT tasks. A significant shortcoming of $k$NN-MT lies in its inefficiency in identifying the $k$ nearest neighbors of the query representation from the entire datastore, which is prohibitively slow when the datastore size is large. In this work, we propose Faster $k$NN-MT to address this issue. The core idea of Faster $k$NN-MT is to use a hierarchical clustering strategy to approximate the distance between the query and a data point in the datastore, which is decomposed into two parts: the distance between the query and the center of the cluster that the data point belongs to, and the distance between the data point and the cluster center. We propose practical ways to compute these two parts in a significantly faster manner. Through extensive experiments on different MT benchmarks, we show that Faster $k$NN-MT is faster than Fast $k$NN-MT \citep{Meng2021Fast} and only slightly (1.2x) slower than its vanilla counterpart, while preserving the model performance of $k$NN-MT. Faster $k$NN-MT enables the deployment of $k$NN-MT models on real-world MT services.
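A rough NumPy illustration of the distance decomposition, under simplifying assumptions (flat k-means instead of the paper's hierarchical clustering, hypothetical function names, and none of the paper's further optimizations):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def build_datastore_index(keys, num_clusters=1024):
    """Offline: cluster datastore keys and precompute each key's distance to its centroid."""
    centers, assign = kmeans2(keys, num_clusters, minit='points')
    key_to_center = np.linalg.norm(keys - centers[assign], axis=1)   # precomputed once
    return centers, assign, key_to_center

def approx_knn(query, centers, assign, key_to_center, k=8, probe=4):
    """Approximate d(query, key) by d(query, center) + d(key, center); probe a few clusters."""
    q_to_center = np.linalg.norm(centers - query, axis=1)            # one small pass per query
    nearest = np.argsort(q_to_center)[:probe]                        # clusters worth probing
    cand = np.where(np.isin(assign, nearest))[0]
    approx_dist = q_to_center[assign[cand]] + key_to_center[cand]
    return cand[np.argsort(approx_dist)[:k]]                         # approximate k nearest neighbors
```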
This paper proposes a simple yet effective method to improve direct (X-to-Y) translation in two settings: zero-shot and when direct data is available. We modify the input tokens at both the encoder and the decoder to include signals for the source and target languages. We show performance gains when training from scratch, or when finetuning a pretrained model with the proposed setup. In our experiments, the method shows gains of nearly 10.0 BLEU points on internal datasets, depending on the checkpoint selection criteria. In a WMT evaluation campaign, from-English performance improves by 4.17 and 2.87 BLEU points in the zero-shot setting and when direct data is available for training, respectively, while X-to-Y improves by 1.29 BLEU over the zero-shot baseline and by 0.44 over the many-to-many baseline. In the low-resource setting, we see an improvement of 1.5 to 1.7 points when finetuning on X-to-Y domain data.
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover the present-day applications of word alignment, from cross-lingual annotation projection, to improving translation.
Sockeye 3 is the latest version of the Sockeye toolkit for Neural Machine Translation (NMT). Now based on PyTorch, Sockeye 3 provides faster model implementations and more advanced features with a further streamlined codebase. This enables broader experimentation with faster iteration, efficient training of stronger and faster models, and the flexibility to move new ideas quickly from research to production. When running comparable models, Sockeye 3 is up to 126% faster than other PyTorch implementations on GPUs and up to 292% faster on CPUs. Sockeye 3 is open source software released under the Apache 2.0 license.
Sparsely activated transformers, such as Mixture of Experts (MoE), have received great interest due to their outrageous scaling capability, which enables dramatic increases in model size without significant increases in computational cost. To achieve this, MoE models replace the feedforward sublayer in the Transformer with Mixture-of-Experts sublayers and use a gating network to route each token to its designated experts. Since the common practice for efficient training of such models requires distributing experts and tokens across different machines, this routing strategy often incurs a huge cross-machine communication cost, because tokens and their assigned experts are likely to reside on different machines. In this paper, we propose Gating Dropout, which allows tokens to ignore the gating network and stay on their local machines, thus reducing cross-machine communication. Similar to traditional dropout, we also show that Gating Dropout has a regularization effect during training, resulting in improved generalization performance. We validate the effectiveness of Gating Dropout on multilingual machine translation tasks. Our results demonstrate that Gating Dropout improves a state-of-the-art MoE model with faster wall-clock convergence and better BLEU scores for a variety of model sizes and datasets.
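A single-machine sketch of the routing idea, with hypothetical names and without the distributed all-to-all communication that the technique is actually designed to avoid:

```python
import torch

def route_with_gating_dropout(tokens, gate, local_expert_id, p_drop=0.2, training=True):
    """With probability p_drop, a token skips the gating network and is sent to the
    expert hosted on its local machine instead of its gate-assigned expert.
    `tokens` is (num_tokens, d_model); `gate` maps d_model -> num_experts logits."""
    expert_ids = gate(tokens).argmax(dim=-1)                     # normal top-1 routing
    if training:
        drop = torch.rand(tokens.size(0), device=tokens.device) < p_drop
        expert_ids = torch.where(drop,
                                 torch.full_like(expert_ids, local_expert_id),
                                 expert_ids)                     # dropped tokens stay local
    return expert_ids
```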
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. The rest of the model, which includes an encoder, decoder and attention module, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model without any increase in parameters, which is significantly simpler than previous proposals for Multilingual NMT. On the WMT'14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT'14 and WMT'15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and show some interesting examples when mixing languages.
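The preprocessing step is essentially one line; the sketch below uses an illustrative token format (e.g. "<2es>") and leaves the rest of the NMT pipeline untouched:

```python
def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend an artificial token telling the shared model which language to produce."""
    return f"<2{target_lang}> {source_sentence}"

# add_target_token("Hello, how are you?", "es")
# -> "<2es> Hello, how are you?"
```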
Compared to conventional bilingual translation systems, massively multilingual machine translation is appealing because a single model can translate into multiple languages and benefit from knowledge transfer for low resource languages. On the other hand, massively multilingual models suffer from the curse of multilinguality, unless scaling their size massively, which increases their training and inference costs. Sparse Mixture-of-Experts models are a way to drastically increase model capacity without the need for a proportional amount of computing. The recently released NLLB-200 is an example of such a model. It covers 202 languages but requires at least four 32GB GPUs just for inference. In this work, we propose a pruning method that allows the removal of up to 80% of experts with a negligible loss in translation quality, which makes it feasible to run the model on a single 32GB GPU. Further analysis suggests that our pruning metrics allow us to identify language-specific experts and prune non-relevant experts for a given language pair.
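A minimal sketch of one way such pruning could be realized: rank experts by how often the router selects them on a small calibration set for a given language pair and keep only the top fraction. The ranking statistic here (selection counts) is an illustrative choice, not necessarily the paper's exact pruning metric.

```python
import torch

def prune_experts_by_usage(gate_logits, keep_ratio=0.2):
    """`gate_logits` has shape (num_tokens, num_experts), collected on calibration data
    for one language pair. Returns the indices of the experts to keep."""
    num_experts = gate_logits.size(1)
    usage = torch.bincount(gate_logits.argmax(dim=-1), minlength=num_experts)
    num_keep = max(1, int(keep_ratio * num_experts))
    keep = usage.float().topk(num_keep).indices
    return keep   # expert sublayers outside `keep` can be dropped before loading on one GPU
```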
Reranking methods in machine translation aim to close the gap between common evaluation metrics (e.g. BLEU) and maximum likelihood learning and decoding algorithms. Prior works address this challenge by training models to rerank beam search candidates according to their predicted BLEU scores, building upon large models pretrained on massive monolingual corpora -- a privilege that was never made available to the baseline translation model. In this work, we examine a simple approach for training rerankers to predict translation candidates' BLEU scores without introducing additional data or parameters. Our approach can be used as a clean baseline, decoupled from external factors, for future research in this area.
This paper presents a new data augmentation method for neural machine translation that can enforce stronger semantic consistency both within and across languages. Our method is based on the Conditional Masked Language Model (CMLM), which is bidirectional and can be conditioned on both left and right context as well as on the label. We demonstrate that CMLM is a good technique for generating context-dependent word distributions. In particular, we show that CMLM is able to enforce semantic consistency by conditioning on both the source and the target during substitution. In addition, to enhance diversity, we incorporate the idea of soft word substitution into data augmentation, which replaces a word with a probability distribution over the vocabulary. Experiments on four translation datasets of different scales show that the overall solution results in more realistic data augmentation and better translation quality. Compared with recent state-of-the-art work, our method consistently achieves the best performance and yields an improvement of up to 1.90 BLEU points over the baseline.
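The soft word substitution idea reduces to a one-line expectation over the vocabulary; the sketch below assumes a distribution already produced by some model (e.g. a CMLM), which is not shown:

```python
import torch

def soft_substitute(embedding_table, token_distribution):
    """Replace a token's embedding with the expected embedding under a predicted
    distribution instead of committing to a single replacement word.
    `embedding_table`: (vocab_size, d_model); `token_distribution`: (vocab_size,), sums to 1."""
    return token_distribution @ embedding_table   # convex combination of word embeddings
```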
In multilingual neural machine translation models that fully share parameters across all languages, an artificial language token is usually used to guide translation into the desired target language. However, recent studies show that prepended language tokens sometimes fail to navigate multilingual neural machine translation models into the right translation direction, especially on zero-shot translation. To mitigate this issue, we propose two methods, language embedding embodiment and language-aware multi-head attention, to learn informative language representations that channel translation into the right direction. The former embodies language embeddings at different critical switching points along the information flow from source to target, aiming at amplifying the translation-direction guiding signal. The latter exploits a matrix, instead of a vector, to represent a language in continuous space; the matrix is chunked into multiple heads so as to learn language representations in multiple subspaces. Experimental results on two datasets for massively multilingual neural machine translation demonstrate that language-aware multi-head attention benefits both supervised and zero-shot translation and significantly alleviates the off-target translation problem. Further linguistic typology prediction experiments show that the matrix-based language representations learned by our methods are able to capture rich linguistic typology features.
Growing data volumes have led to ever larger general-purpose models. Specific use cases are often left out, because generic models tend to perform poorly in domain-specific settings. Our work addresses this gap with a method for selecting in-domain data from generic-domain (parallel text) corpora for the task of machine translation. The proposed method ranks sentences in the parallel generic-domain data according to their cosine similarity with a monolingual domain-specific dataset. We then select the top k sentences with the highest similarity scores to train a new machine translation system tuned to the specific in-domain data. Our experimental results show that models trained on this selected in-domain data outperform models trained on generic data or a mixture of generic and domain data. That is, our method selects high-quality domain-specific training instances at low computational cost and with a small data size.
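A simplified sketch of such a selection loop, assuming any off-the-shelf sentence encoder; scoring against the mean in-domain embedding is one simple realization of the cosine-similarity ranking, not necessarily the exact scoring used in the paper:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def select_in_domain(generic_src, domain_mono, top_k, model_name="all-MiniLM-L6-v2"):
    """Return indices of the generic-domain sentence pairs most similar to the in-domain corpus."""
    encoder = SentenceTransformer(model_name)
    dom = encoder.encode(domain_mono, normalize_embeddings=True)
    gen = encoder.encode(generic_src, normalize_embeddings=True)
    centroid = dom.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = gen @ centroid                 # cosine similarity to the domain centroid
    return np.argsort(-scores)[:top_k]      # top-k generic pairs to keep for finetuning
```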
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference, sometimes prohibitively so in the case of very large data sets and large models. Several authors have also charged that NMT systems lack robustness, particularly when input sentences contain rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using residual connections as well as attention connections from the decoder network to the encoder. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. To directly optimize the translation BLEU scores, we consider refining the models by using reinforcement learning, but we found that the improvement in the BLEU scores did not reflect in the human evaluation. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.
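A small sketch of the beam-search rescoring with length normalization and coverage penalty; the default values of alpha and beta below are illustrative tuning knobs, and the attention distributions are assumed to come from the decoder:

```python
import math

def length_penalty(length, alpha=0.6):
    """Length normalization: lp(Y) = ((5 + |Y|) / 6) ** alpha."""
    return ((5.0 + length) / 6.0) ** alpha

def coverage_penalty(attention, beta=0.2):
    """Coverage penalty; `attention` is a list of per-target-step attention
    distributions over the source positions."""
    num_src = len(attention[0])
    total = 0.0
    for j in range(num_src):
        summed = sum(step[j] for step in attention)
        total += math.log(min(summed, 1.0))
    return beta * total

def candidate_score(log_prob, length, attention, alpha=0.6, beta=0.2):
    """Rescore a beam candidate: s(Y, X) = log P(Y|X) / lp(Y) + cp(X; Y)."""
    return log_prob / length_penalty(length, alpha) + coverage_penalty(attention, beta)
```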
We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly available parallel corpora, and additionally mine 37.4 million sentence pairs from the web, resulting in a 4x increase. We mine the parallel sentences from the web by combining many corpora, tools and methods: (a) web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest neighbour search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validates the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We train multilingual NMT models spanning all these languages on Samanantar, which outperform existing models and baselines on publicly available benchmarks such as FLORES, establishing the utility of Samanantar. Our data and models are publicly available at https://indicnlp.ai4bharat.org/samanantar/ and we hope they will help advance research in NMT and multilingual NLP.
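A minimal sketch of step (d) only, assuming L2-normalized multilingual sentence embeddings (float32) are already computed by some encoder; the crawling, OCR and alignment-model steps, and any margin-based filtering, are not shown:

```python
import faiss
import numpy as np

def mine_candidates(en_emb, indic_emb, k=4, threshold=0.8):
    """Approximate nearest-neighbour search over sentence embeddings to propose parallel pairs."""
    index = faiss.IndexFlatIP(indic_emb.shape[1])    # inner product == cosine for normalized vectors
    index.add(indic_emb)
    sims, ids = index.search(en_emb, k)              # top-k neighbours for every English sentence
    pairs = [(i, int(ids[i, 0])) for i in range(len(en_emb)) if sims[i, 0] >= threshold]
    return pairs                                     # candidate parallel sentence pairs
```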
This work applies Minimum Bayes Risk (MBR) decoding to optimize various automated metrics of translation quality. Automatic metrics in machine translation have made tremendous progress recently. In particular, neural metrics fine-tuned on human ratings (e.g. BLEURT or COMET) outperform surface metrics in terms of correlation with human judgments. Our experiments show that combining a neural translation model with a neural reference-based metric, BLEURT, leads to significant improvements in both automatic and human evaluation. This improvement is obtained with translations that differ from the classical beam-search output: these translations have much lower likelihood and are less favored by surface metrics such as BLEU.
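The sampling-based MBR loop itself is short; the sketch below treats the utility metric as a black box (BLEURT, COMET, chrF, etc.) and omits the candidate-generation and batching details:

```python
def mbr_decode(candidates, utility):
    """Pick the candidate with the highest average utility when every other
    candidate is treated as a pseudo-reference. `utility(hypothesis, reference)`
    returns a quality score such as BLEURT."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        expected = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        expected /= max(1, len(candidates) - 1)
        if expected > best_score:
            best, best_score = hyp, expected
    return best
```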
Multilingual neural machine translation (MNMT) aims to translate multiple languages with a single model and has been proved successful thanks to effective knowledge transfer among different languages with shared parameters. However, it is still an open question which parameters should be shared and which ones need to be task-specific. Currently, the common practice is to heuristically design or search for language-specific modules, which makes it difficult to find the optimal configuration. In this paper, we propose a novel parameter differentiation based method that allows the model to determine which parameters should be language-specific during training. Inspired by cellular differentiation, each shared parameter in our method can dynamically differentiate into more specialized types. We further define the differentiation criterion as inter-task gradient similarity, so parameters with conflicting inter-task gradients are more likely to become language-specific. Extensive experiments on multilingual datasets show that our method significantly outperforms various strong baselines with different parameter sharing configurations. Further analysis reveals that the parameter sharing configurations obtained by our method correlate well with linguistic proximity.
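A sketch of the differentiation criterion only: average pairwise cosine similarity between per-language gradients of a shared parameter, where low or negative values flag the parameter as a candidate to split into language-specific copies. The dynamic splitting during training is not shown, and the thresholding policy is an assumption.

```python
import torch

def gradient_conflict_score(grads_per_language):
    """`grads_per_language`: list of flattened gradient tensors for one shared parameter,
    one per language (or language group). Lower scores indicate more conflict."""
    sims, n = 0.0, 0
    for i in range(len(grads_per_language)):
        for j in range(i + 1, len(grads_per_language)):
            sims += torch.cosine_similarity(grads_per_language[i],
                                            grads_per_language[j], dim=0).item()
            n += 1
    return sims / max(n, 1)
```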
We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models are freely available.
We explore efficient evaluation metrics for natural language generation (NLG). To make the metrics efficient, we replace (i) the computation-heavy transformers in metrics such as BERTScore, MoverScore, BARTScore and XMoverScore with lighter versions (e.g. distilled ones), and (ii) cubic-time inference alignment algorithms such as Word Mover's Distance with linear and quadratic approximations. We consider six evaluation metrics (both monolingual and multilingual), assessed on three different machine translation datasets, and 16 lightweight transformers as replacements. We find that (a) TinyBERT shows the best quality-efficiency tradeoff for the semantic similarity metrics of the BERTScore family, retaining 97% of the quality while being on average 5x faster at inference time, (b) there is a large difference between CPU and GPU speedups (with higher speedups on CPU), and (c) the WMD approximations bring no efficiency gains but degrade quality on 2 of the 3 datasets we examined.
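A simplified BERTScore-style similarity with a lightweight backbone, as a sketch of the swap in (i). The model name is one of the public TinyBERT checkpoints on the Hugging Face hub; idf weighting and baseline rescaling from the original metric are omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def light_bertscore_f1(candidate, reference, name="huawei-noah/TinyBERT_General_4L_312D"):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    with torch.no_grad():
        cand = model(**tok(candidate, return_tensors="pt")).last_hidden_state[0]
        ref = model(**tok(reference, return_tensors="pt")).last_hidden_state[0]
    cand = torch.nn.functional.normalize(cand, dim=-1)
    ref = torch.nn.functional.normalize(ref, dim=-1)
    sim = cand @ ref.T                            # token-level cosine similarities
    precision = sim.max(dim=1).values.mean()      # greedy matching, candidate -> reference
    recall = sim.max(dim=0).values.mean()         # greedy matching, reference -> candidate
    return (2 * precision * recall / (precision + recall)).item()
```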
Multilingual pretrained models are effective for machine translation and cross-lingual processing because they contain multiple languages in one model. However, they are pretrained after their tokenizers are fixed; therefore it is difficult to change the vocabulary after pretraining. When we extend the pretrained models to new languages, we must modify the tokenizers simultaneously. In this paper, we add new subwords to the SentencePiece tokenizer to apply a multilingual pretrained model to new languages (Inuktitut in this paper). In our experiments, we segmented Inuktitut sentences into subwords without changing the segmentation of already pretrained languages, and applied the mBART-50 pretrained model to English-Inuktitut translation.
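One common recipe for adding new subword pieces to an existing SentencePiece model without changing how already-covered languages are segmented is to append pieces to the serialized model proto, sketched below; the file names are placeholders and this is not necessarily the authors' exact procedure.

```python
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

def extend_sentencepiece(model_path, new_pieces, out_path):
    """Append new subword pieces (e.g. learned on Inuktitut text) to an existing model."""
    m = sp_pb2.ModelProto()
    with open(model_path, "rb") as f:
        m.ParseFromString(f.read())
    existing = {p.piece for p in m.pieces}
    for piece in new_pieces:
        if piece not in existing:
            new = sp_pb2.ModelProto.SentencePiece()
            new.piece, new.score = piece, 0.0
            m.pieces.append(new)
    with open(out_path, "wb") as f:
        f.write(m.SerializeToString())

# The pretrained model's embedding matrix then needs to be enlarged to cover the added ids
# (e.g. model.resize_token_embeddings(new_vocab_size) in Hugging Face Transformers).
```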