In Neural Machine Translation (NMT), each token prediction is conditioned on the source sentence and the target prefix (what has been previously translated at a given decoding step). However, previous work on interpretability in NMT has focused almost exclusively on attributions over source-sentence tokens. As a result, we lack a full understanding of the influence of every input token (source sentence and target prefix) on the model's predictions. In this work, we propose an interpretability method that tracks the attributions of input tokens from both contexts. Our method, which can be extended to any encoder-decoder Transformer-based model, allows us to better understand the inner workings of current NMT models. We apply the proposed method to both bilingual and multilingual Transformers and present insights into their behaviour.
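To make the idea concrete, here is a minimal gradient-norm saliency sketch (not necessarily the attribution method the abstract describes) that scores how much each source token and each target-prefix token contributes to a single next-token prediction; the toy model, vocabulary size, and token ids are purely illustrative.

```python
# Sketch: gradient-norm attributions over BOTH the source tokens and the target prefix
# for one decoding step, using a toy encoder-decoder so the example stays self-contained.
import torch
import torch.nn as nn

vocab, d_model = 100, 32
embed = nn.Embedding(vocab, d_model)
transformer = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=2,
                             num_decoder_layers=2, batch_first=True)
proj = nn.Linear(d_model, vocab)

src_ids = torch.tensor([[5, 17, 42, 8]])      # source sentence (toy ids)
tgt_ids = torch.tensor([[1, 23, 9]])          # target prefix generated so far

src_emb = embed(src_ids).detach().requires_grad_(True)
tgt_emb = embed(tgt_ids).detach().requires_grad_(True)

out = transformer(src_emb, tgt_emb)           # (1, tgt_len, d_model)
logits = proj(out[:, -1])                     # next-token distribution
logits[0, logits.argmax()].backward()         # gradient of the top predicted logit

# One saliency score per input token, covering both contexts.
src_attr = src_emb.grad.norm(dim=-1).squeeze(0)
tgt_attr = tgt_emb.grad.norm(dim=-1).squeeze(0)
print("source attributions:", src_attr.tolist())
print("prefix attributions:", tgt_attr.tolist())
```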
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover present-day applications of word alignment, from cross-lingual annotation projection to improving translation.
Directly training a document-to-document (Doc2Doc) neural machine translation (NMT) model with a Transformer from scratch, especially on small datasets, usually fails to converge. Our dedicated probing tasks show that 1) both absolute and relative position information gradually weakens or even vanishes once it reaches the upper encoder layers, and 2) the vanishing of absolute position information in the encoder output causes the training failure of Doc2Doc NMT. To alleviate this problem, we propose a position-aware Transformer (P-Transformer) that enhances both absolute and relative position information in both self-attention and cross-attention. Specifically, we integrate absolute positional information, i.e., position embeddings, into the query-key pairs in both self-attention and cross-attention through a simple yet effective addition operation. Moreover, we also integrate relative position encoding into self-attention. The proposed P-Transformer uses sinusoidal position encoding and does not require any task-specific position embedding, segment embedding, or attention mechanism. Using these methods, we build a Doc2Doc NMT model with the P-Transformer, which ingests the source document and generates the complete target document in a sequence-to-sequence (seq2seq) fashion. In addition, the P-Transformer can be applied to seq2seq-based document-to-sentence (Doc2Sent) and sentence-to-sentence (Sent2Sent) translation. Extensive experimental results on Doc2Doc NMT show that the P-Transformer significantly outperforms strong baselines on 9 widely used document-level datasets in 7 language pairs, covering small, medium, and large scales, and achieves a new state of the art. Experiments on discourse phenomena show that our Doc2Doc NMT models improve translation quality in terms of both BLEU and discourse coherence. We make our code available on GitHub.
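A rough sketch of the core mechanism described above, as I read it: sinusoidal absolute position embeddings are added to the queries and keys of an attention layer (self- or cross-attention), so position information is re-injected at every layer rather than vanishing. All dimensions are illustrative and this is not the authors' implementation.

```python
import math
import torch

def sinusoidal(length, d_model):
    pos = torch.arange(length).unsqueeze(1).float()
    i = torch.arange(0, d_model, 2).float()
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

def position_aware_attention(q, k, v):
    # q: (B, Lq, D); k, v: (B, Lk, D)
    d = q.size(-1)
    q = q + sinusoidal(q.size(1), d)   # inject absolute positions into the queries
    k = k + sinusoidal(k.size(1), d)   # ...and into the keys (values stay untouched)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    return torch.softmax(scores, dim=-1) @ v

q = torch.randn(2, 7, 64)        # e.g. decoder states as cross-attention queries
k = v = torch.randn(2, 50, 64)   # encoder outputs over a long document
out = position_aware_attention(q, k, v)
print(out.shape)                 # torch.Size([2, 7, 64])
```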
Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads in the encoder to the overall performance of the model and analyze the roles they play. We find that the most important and confident heads play consistent and often linguistically interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the $L_0$ penalty, we observe that specialized heads are the last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.
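For readers unfamiliar with the gating scheme, the following is a compact sketch of a Hard Concrete gate with an expected-$L_0$ penalty in the spirit of Louizos et al. (2017), which this style of pruning builds on; the hyperparameters and the way the gate is attached to head outputs are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    def __init__(self, n_heads, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_heads))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        # Stochastic, differentiable relaxation of a 0/1 gate per head.
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + self.log_alpha) / self.beta)
        s = s * (self.zeta - self.gamma) + self.gamma
        return s.clamp(0, 1)

    def expected_l0(self):
        # Differentiable proxy for "number of heads kept"; added to the training loss.
        return torch.sigmoid(
            self.log_alpha - self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        ).sum()

gate = HardConcreteGate(n_heads=8)
head_outputs = torch.randn(8, 4, 16, 64)          # (heads, batch, len, dim), illustrative
gated = gate().view(-1, 1, 1, 1) * head_outputs   # scale each head's output by its gate
loss_l0 = 0.01 * gate.expected_l0()               # sparsity pressure on the gates
```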
Research in multimodal machine translation (MMT) has recently surged, with additional modalities such as images being used to improve the translation quality of text-only systems. A particular use case for such multimodal systems is the task of simultaneous machine translation, where visual context has been shown to complement the partial information provided by the source sentence, especially in the early phases of translation. In this paper, we propose the first Transformer-based simultaneous MMT architecture, which has not previously been explored in the field. In addition, we extend this model with an auxiliary supervision signal that guides its visual attention mechanism using labelled phrase-region alignments. We perform comprehensive experiments on three language directions and conduct thorough quantitative and qualitative analyses using both automatic metrics and manual inspection. Our results show that (i) supervised visual attention consistently improves the translation quality of the MMT models, and (ii) fine-tuning the MMT with the supervision loss improves performance over training the MMT from scratch. Compared to the state-of-the-art models, our proposed model achieves improvements of up to 2.3 BLEU and 3.5 METEOR points.
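A hedged sketch of what such an auxiliary supervision signal could look like: the model's visual attention over image regions is pushed toward the regions marked as relevant by the phrase-region alignments. The loss form, shapes, and weight below are assumptions for illustration, not the paper's exact objective.

```python
import torch

def visual_attention_loss(attn, gold_regions):
    # attn: (batch, tgt_len, n_regions) softmax-normalised visual attention
    # gold_regions: (batch, tgt_len, n_regions) 0/1 mask from phrase-region alignments
    gold = gold_regions / gold_regions.sum(-1, keepdim=True).clamp(min=1)
    # cross-entropy between the gold region distribution and the predicted attention
    return -(gold * attn.clamp(min=1e-9).log()).sum(-1).mean()

attn = torch.softmax(torch.randn(2, 5, 36), dim=-1)
gold = torch.zeros(2, 5, 36)
gold[:, :, 3] = 1                                  # toy alignment: region 3 is relevant
aux = visual_attention_loss(attn, gold)

translation_loss = torch.tensor(0.0)               # placeholder for the usual NMT loss
total_loss = translation_loss + 0.5 * aux          # auxiliary weight is an assumption
```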
In multilingual neural machine translation models that fully share parameters across all languages, an artificial language token is usually used to guide translation into the desired target language. However, recent studies have shown that prepended language tokens sometimes fail to navigate multilingual neural machine translation models into the correct translation direction, especially on zero-shot translation. To mitigate this issue, we propose two methods, language embedding embodiment and language-aware multi-head attention, to learn informative language representations that steer translation into the correct direction. The former embodies languages at different critical switching points along the information flow from source to target, aiming to amplify the translation-direction guiding signal. The latter uses a matrix, instead of a vector, to represent a language in the continuous space; the matrix is split into multiple heads so as to learn language representations in multiple subspaces. Experimental results on two datasets for massively multilingual neural machine translation show that language-aware multi-head attention benefits both supervised and zero-shot translation and significantly alleviates the off-target translation problem. Further linguistic typology prediction experiments show that the matrix-based language representations learned via our methods are able to capture rich linguistic typology features.
Hallucination, a pathological translation phenomenon that plagues neural machine translation, has recently received much attention. In short, hallucinated translations are fluent sentences that have little relation to the source input. Arguably, how hallucinations arise remains an open question. In this paper, we propose to use probing methods to investigate the causes of hallucinations from the perspective of model architecture, aiming to avoid such problems in future architecture designs. By conducting experiments on various NMT datasets, we find that hallucinations are often accompanied by a deficient encoder, especially its embeddings, and by vulnerable cross-attention; interestingly, cross-attention also mitigates some of the errors caused by the encoder.
I train models for the neural machine translation task using the Hunglish2 corpus. The main contribution of this work is the evaluation of different data augmentation methods during the training of NMT models. I propose five different augmentation methods that are structure-aware, meaning that instead of randomly selecting words for blanking or replacement, the dependency tree of a sentence is used as the basis for augmentation. I begin with a detailed literature review on neural networks, sequential modelling, neural machine translation, dependency parsing, and data augmentation. After a detailed exploratory data analysis and preprocessing of the Hunglish2 corpus, I conduct experiments using the proposed data augmentation techniques. The best model for Hungarian-to-English reaches a BLEU score of 33.9, while the best model for English-to-Hungarian reaches a BLEU score of 28.6.
One of the important breakthroughs in the history of machine translation was the development of the Transformer model, which was revolutionary not only for various translation tasks but also for most other NLP tasks. In this paper, we target a Transformer-based system that is able to translate a German source sentence into its corresponding English target sentence. We conduct experiments on the news-commentary German-English parallel sentences of the WMT'13 dataset. In addition, we study the effect of including additional general-domain data from the IWSLT'16 dataset during training on the performance of the Transformer model. We find that including the IWSLT'16 dataset in training helps gain 2 BLEU score points on the test set of the WMT'13 dataset. A qualitative analysis is introduced to analyse how the use of general-domain data helps improve the quality of the generated translation sentences.
We present an open-source toolkit for neural machine translation (NMT). The new toolkit is mainly based on the Transformer architecture (Vaswani et al., 2017), along with many other improvements detailed below, in order to create a self-contained, simple-to-use, consistent, and comprehensive framework for machine translation tasks across various domains. It is a tool designed to support both bilingual and multilingual translation tasks, starting from building models for the respective corpora, to inferring new predictions, or packaging models into a JIT format for serving.
Pre-training on monolingual data followed by fine-tuning on machine translation (MT) has recently been successful, but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters when fine-tuning a pre-trained model on MT. We focus on 1) fine-tuning BART, a model trained only on English monolingual data, and 2) fine-tuning mBART, a model trained on monolingual data in 25 languages. For BART, we obtain the best performance by freezing most of the model parameters and adding extra positional embeddings. For mBART, we match the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder attention parameters are the most important ones to fine-tune. We see the largest improvements over the baseline when constraining ourselves to an out-of-domain training set for Vietnamese to English.
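A minimal sketch of the freezing recipe discussed above, using Hugging Face BART for illustration; the exact set of modules left trainable in the paper may differ (here only the encoder attention parameters are unfrozen, and the extra positional embeddings mentioned above are omitted).

```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

for name, param in model.named_parameters():
    param.requires_grad = False                 # freeze everything by default
    if "encoder" in name and "attn" in name:    # keep encoder attention trainable
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```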
Multilingual NMT has become an attractive solution for MT deployment in production. But to match bilingual quality, it comes at the cost of larger and slower models. In this work, we consider several ways to make multilingual NMT faster at inference without degrading its quality. We experiment with several "light decoder" architectures in two 20-language multi-parallel settings: small-scale on TED Talks and large-scale on ParaCrawl. Our experiments show that combining a shallow decoder with vocabulary filtering yields more than twice faster inference with no loss in translation quality. We validate our findings with BLEU and chrF (on 380 language pairs), robustness evaluation, and human evaluation.
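A simple sketch of the vocabulary-filtering idea: at inference time the output distribution is restricted to the subwords actually observed for the chosen target language, shrinking the softmax and speeding up decoding. The vocabulary size and the set of allowed ids are illustrative.

```python
import torch

def filter_vocab(logits, allowed_ids):
    # logits: (batch, vocab); allowed_ids: subword ids observed for the target language
    mask = torch.full_like(logits, float("-inf"))
    mask[:, allowed_ids] = 0.0
    return logits + mask

logits = torch.randn(4, 32000)
allowed_ids = torch.arange(0, 8000)   # e.g. ~8k subwords cover the chosen target language
next_token = filter_vocab(logits, allowed_ids).argmax(-1)
```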
The study of the attention mechanism has sparked interest in many fields, such as language modeling and machine translation. Although its patterns have been exploited to perform different tasks, from neural network understanding to textual alignment, no previous work has analysed the encoder-decoder attention behavior in speech translation (ST) nor used it to improve ST on a specific task. In this paper, we fill this gap by proposing an attention-based policy (EDAtt) for simultaneous ST (SimulST) that is motivated by an analysis of the existing attention relations between audio input and textual output. Its goal is to leverage the encoder-decoder attention scores to guide inference in real time. Results on en->{de, es} show that the EDAtt policy achieves overall better results compared to the SimulST state of the art, especially in terms of computational-aware latency.
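A hedged sketch of an attention-based emission policy in the spirit of EDAtt (the threshold, window, and exact rule below are illustrative, not the paper's specification): if the candidate token's encoder-decoder attention concentrates on the most recently received audio frames, the model is likely still waiting for disambiguating input and should not emit yet.

```python
import torch

def should_emit(cross_attn, last_frames=2, alpha=0.6):
    # cross_attn: (n_heads, enc_len) attention of the candidate token over audio frames
    attn = cross_attn.mean(0)                  # average over heads
    recent_mass = attn[-last_frames:].sum()    # attention on the newest frames
    return recent_mass.item() < alpha          # emit only if not focused on the frontier

cross_attn = torch.softmax(torch.randn(8, 40), dim=-1)
print("emit token now:", should_emit(cross_attn))
```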
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. The rest of the model, which includes an encoder, decoder and attention module, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model without any increase in parameters, which is significantly simpler than previous proposals for Multilingual NMT. On the WMT'14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on the WMT'14 and WMT'15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation are possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and show some interesting examples when mixing languages.
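The data-side change is small enough to show in full; a minimal sketch (tag strings are illustrative, and real systems apply this before subword segmentation):

```python
def add_target_tag(source_sentence, target_lang):
    # The only preprocessing change: prepend a target-language tag to the source text.
    return f"<2{target_lang}> {source_sentence}"

print(add_target_tag("Hello, how are you?", "es"))  # "<2es> Hello, how are you?"
print(add_target_tag("Hello, how are you?", "de"))  # "<2de> Hello, how are you?"
# Zero-shot: at test time a tag can be combined with a source language the pair of
# which was never seen together in training, e.g. Portuguese input with "<2es>".
```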
Simultaneous translation, which begins translating each sentence after receiving only a few words of the source sentence, plays an important role in many scenarios. Although the previous prefix-to-prefix framework is considered suitable for simultaneous translation and achieves good performance, it still has two unavoidable drawbacks: the high computational cost caused by the need to train a separate model for each latency $k$, and an insufficient ability to encode information, since each target token can only attend to a specific source prefix. We propose a novel framework that adopts a simple but effective decoding strategy designed for full-sentence models. Within this framework, training a single full-sentence model can achieve any given latency and saves computational resources. Moreover, thanks to the full-sentence model's ability to encode the whole sentence, our decoding strategy can enhance the information maintained in the decoding states in real time. Experimental results show that our method achieves better translation quality than baselines on 4 directions: Zh$\rightarrow$En, En$\rightarrow$Ro, and En$\leftrightarrow$De.
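As a rough illustration of how a single full-sentence model can serve any latency, here is a wait-k-style decoding loop; the paper's actual decoding strategy differs in its details, and `translate_prefix` is a hypothetical helper that wraps an ordinary full-sentence NMT model with forced decoding of the committed prefix.

```python
def simultaneous_decode(source_stream, translate_prefix, k=3):
    source, target = [], []
    for word in source_stream:                 # source words arrive one at a time
        source.append(word)
        if len(source) >= k:                   # after an initial wait of k words...
            hyp = translate_prefix(source, prefix=target)
            if len(hyp) > len(target):
                target.append(hyp[len(target)])   # ...commit exactly one new token
    # once the source is complete, finish the translation normally
    return translate_prefix(source, prefix=target)

# Toy stand-in for a full-sentence NMT model, just so the sketch runs end to end.
def translate_prefix(source, prefix):
    hyp = [w.upper() for w in source]          # pretend word-by-word "translation"
    return prefix + hyp[len(prefix):]

print(simultaneous_decode("wir haben ein beispiel gesehen".split(), translate_prefix, k=3))
```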
The impressive performance of the Transformer is attributed to self-attention, which accounts for dependencies over the entire input at every position. In this work, we reform the neural $n$-gram model, which focuses only on a few surrounding representations at each position, with the multi-head mechanism as in Vaswani et al. (2017). Through experiments on sequence-to-sequence tasks, we show that replacing self-attention in the Transformer with multi-head neural $n$-grams can achieve comparable or better performance than the Transformer. From various analyses of our proposed method, we find that multi-head neural $n$-grams and self-attention are complementary, and that their combination can further improve the performance of the vanilla Transformer.
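A rough, single-layer sketch of how a multi-head neural $n$-gram block might replace self-attention: every position mixes only its own and its $n-1$ preceding representations, with separate mixing weights per head. This is my simplified reading of the idea, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MultiHeadNeuralNgram(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n=3):
        super().__init__()
        self.n, self.h, self.dk = n, n_heads, d_model // n_heads
        self.proj_in = nn.Linear(d_model, d_model)
        self.mix = nn.Parameter(torch.randn(n_heads, n))  # per-head weights over the window
        self.proj_out = nn.Linear(d_model, d_model)

    def forward(self, x):                        # x: (batch, length, d_model)
        b, l, _ = x.shape
        h = self.proj_in(x).view(b, l, self.h, self.dk)
        pad = h.new_zeros(b, self.n - 1, self.h, self.dk)
        h = torch.cat([pad, h], dim=1)           # left-pad so the window stays causal
        # stack the n most recent representations (current + n-1 preceding) per position
        window = torch.stack([h[:, i:i + l] for i in range(self.n)], dim=2)  # (b,l,n,h,dk)
        w = torch.softmax(self.mix, dim=-1)      # (h, n)
        out = torch.einsum("blnhd,hn->blhd", window, w).reshape(b, l, -1)
        return self.proj_out(out)

block = MultiHeadNeuralNgram()
y = block(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```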
Neural machine translation (NMT) models have been effective on large bilingual datasets. However, existing methods and techniques show that model performance depends heavily on the number of examples in the training data. For many languages, having that amount of corpora is a far-fetched dream. Taking inspiration from monolingual speakers exploring new languages using bilingual dictionaries, we investigate the applicability of bilingual dictionaries for languages with extremely low, or no, bilingual corpora. In this paper, we explore methods that use bilingual dictionaries with an NMT model to improve translation for extremely low-resource languages. We extend this work to multilingual systems, which exhibit zero-shot properties. We present a detailed analysis of the effects of dictionary quality, training dataset size, language family, and other factors on translation quality. Results on multiple low-resource test languages show that our bilingual-dictionary-based method improves over the baselines.
Neural machine translation (NMT) systems aim to map text from one language into another. While there is a wide variety of applications of NMT, one of the most important is the translation of natural languages. A notable factor of natural language is that words are typically ordered according to the rules of the grammar of a given language. Although many advances have been made in developing NMT systems for translating natural languages, little research has been done on understanding how word order and lexical similarity between the source and target languages affect translation performance. Here, we investigate these relationships for a variety of low-resource language pairs from the OpenSubtitles2016 database, with English as the source language, and find that the more similar the target language is to English, the better the translation performance. In addition, we study the effect of providing the NMT model with the part-of-speech (POS) sequence of the English input: for a Transformer-based model, the more dissimilar the target language is from English, the more the POS information appears to help.
Non-autoregressive (NAR) machine translation has recently achieved remarkable improvements and now outperforms autoregressive (AR) models on some benchmarks, providing an efficient alternative to AR inference. However, while AR translation is often implemented with multilingual models that benefit from transfer between languages and from improved serving efficiency, multilingual NAR models remain relatively unexplored. Taking Connectionist Temporal Classification (CTC) as an example NAR model and the Transformer as a semi-NAR model, we present a comprehensive empirical study of multilingual NAR. We test its capabilities with respect to positive transfer between related languages and negative transfer under capacity constraints. As NAR models require distilled training sets, we carefully study the impact of bilingual versus multilingual teachers. Finally, we fit a scaling law for multilingual NAR, which quantifies its performance relative to the AR model as the model scale grows.
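For reference, a minimal sketch of the CTC objective such NAR models are trained with: the decoder emits one distribution per (possibly upsampled) source position, and CTC marginalises over all monotonic alignments to the reference; sizes below are illustrative.

```python
import torch
import torch.nn as nn

vocab, blank = 1000, 0
ctc = nn.CTCLoss(blank=blank, zero_infinity=True)

enc_len, tgt_len, batch = 24, 10, 2            # NAR output length must exceed target length
log_probs = torch.randn(enc_len, batch, vocab).log_softmax(-1)   # (T, N, V)
targets = torch.randint(1, vocab, (batch, tgt_len))              # reference token ids
input_lengths = torch.full((batch,), enc_len, dtype=torch.long)
target_lengths = torch.full((batch,), tgt_len, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(float(loss))
```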
Multilingual machine translation suffers from negative interference across languages. A common solution is to relax parameter sharing with language-specific modules such as adapters. However, adapters of related languages are unable to transfer information, and their total number of parameters becomes prohibitively large as the number of languages grows. In this work, we overcome these drawbacks using hyper-adapters -- hyper-networks that generate adapters from language and layer embeddings. While past work had poor results when scaling hyper-networks, we propose a rescaling fix that significantly improves convergence and enables training larger hyper-networks. We find that hyper-adapters are more parameter-efficient than regular adapters, reaching the same performance with up to 12 times fewer parameters. When using the same number of parameters and FLOPS, our approach consistently outperforms regular adapters. Also, hyper-adapters converge faster than alternative approaches and scale better than regular dense networks. Our analysis shows that hyper-adapters learn to encode language relatedness, enabling positive transfer across languages.
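A condensed sketch of the hyper-adapter idea as described above (my reading, without the rescaling fix): a small hyper-network maps a language embedding and a layer embedding to the weights of a residual bottleneck adapter, so per-language adapters are generated rather than stored.

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    def __init__(self, n_langs, n_layers, d_model=512, bottleneck=64, d_embed=32):
        super().__init__()
        self.lang_emb = nn.Embedding(n_langs, d_embed)
        self.layer_emb = nn.Embedding(n_layers, d_embed)
        n_weights = 2 * d_model * bottleneck             # down- and up-projection matrices
        self.hyper = nn.Linear(2 * d_embed, n_weights)   # the hyper-network
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, x, lang_id, layer_id):             # x: (batch, len, d_model)
        ctx = torch.cat([self.lang_emb(lang_id), self.layer_emb(layer_id)], dim=-1)
        w = self.hyper(ctx)                               # generated adapter parameters
        down, up = w.split(self.d_model * self.bottleneck)
        down = down.view(self.d_model, self.bottleneck)
        up = up.view(self.bottleneck, self.d_model)
        return x + torch.relu(x @ down) @ up              # residual bottleneck adapter

adapter = HyperAdapter(n_langs=20, n_layers=6)
y = adapter(torch.randn(2, 7, 512), torch.tensor(3), torch.tensor(1))
print(y.shape)  # torch.Size([2, 7, 512])
```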