This paper introduces the joint submission of Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German. Building on the Transformer, we apply several effective variants. In our experiments, we utilize the pre-training-then-fine-tuning paradigm. In the first pre-training stage, we employ data filtering and synthetic data generation (i.e., back-translation, forward-translation, and knowledge distillation). In the second fine-tuning stage, we investigate speaker-aware in-domain data generation, speaker adaptation, prompt-based context modeling, target denoising fine-tuning, and boosted self-COMET-based model ensemble. Our systems achieve COMET scores of 0.810 and 0.946 on English-German and German-English, respectively, the highest among all submissions.
This paper introduces WeChat's participation in the WMT 2022 biomedical shared translation task on Chinese$\to$English. Our systems are based on the Transformer and use several different Transformer structures to improve translation quality. In our experiments, we employ data filtering, data generation, several Transformer variants, fine-tuning, and model ensembling. Our Chinese$\to$English system, named Summer, achieves the highest BLEU score among all submissions.
We describe the JD Explore Academy submission to the WMT 2022 shared general translation task. We participated in all high-resource tracks and one medium-resource track, including Chinese-English, German-English, Czech-English, Russian-English, and Japanese-English. We push the limits of previous work on bidirectional training for translation by scaling up two main factors, namely language pairs and model size, yielding our \textbf{Vega-MT} system. As for language pairs, we scale the "bidirectional" setting up to a "multidirectional" one covering all participating languages, to exploit common knowledge across languages and transfer it to downstream bilingual tasks. As for model size, we scale the Transformer-Big up to an extremely large model with nearly 4.7 billion parameters, to fully enhance the capacity of our Vega-MT. We also adopt data augmentation strategies, such as cycle translation for monolingual data and bidirectional self-training for bilingual and monolingual data, to comprehensively exploit both kinds of data. To adapt Vega-MT to the general-domain test set, generalization tuning is designed. Based on the official automatic scores of constrained systems, in terms of sacreBLEU (shown in Figure 1), we obtained 1st place on {Zh-En (33.5), En-Zh (49.7), De-En (33.7), En-De (37.8), Cs-En (54.9), En-Cs (41.4), and En-Ru (32.7)}, 2nd place on {Ru-En (45.1) and Ja-En (25.6)}, and 3rd place on {En-Ja (41.5)}; in terms of COMET, we obtained 1st place on {Zh-En (45.1), En-Zh (61.7), De-En (58.0), En-De (63.2), Cs-En (74.7), Ru-En (64.9), En-Ru (69.6), and En-Ja (65.1)}, and 2nd place on {En-Cs (95.3) and Ja-En (40.6)}. The models will be released to facilitate the MT community through the GitHub and OmniForce platforms.
This paper describes the submission of our end-to-end YiTrans speech translation system to the IWSLT 2022 offline task, which translates English audio into German, Chinese, and Japanese. The YiTrans system is built on large-scale pre-trained encoder-decoder models. More specifically, we first design a multi-stage pre-training strategy to build a multi-modality model with large amounts of labeled and unlabeled data. We then fine-tune the corresponding components of the model for the downstream speech translation tasks. Moreover, we make various efforts to improve performance, such as data filtering, data augmentation, speech segmentation, and model ensembling. Experimental results show that our YiTrans system achieves significant improvements over strong baselines in all three translation directions, including a +5.2 BLEU improvement over last year's best end-to-end system on tst2021 English-German. Our final submissions rank first for the English-German and English-Japanese end-to-end systems according to the automatic evaluation metrics. We make our code and models publicly available.
This paper provides an overview of NVIDIA NeMo's neural machine translation systems for the constrained data track of the WMT21 News and Biomedical Shared Translation Tasks. Our news task submissions for English-German (En-De) and English-Russian (En-Ru) are built on top of a baseline Transformer-based sequence-to-sequence model. Specifically, we use 1) checkpoint averaging, 2) model scaling, 3) data augmentation with back-translation and knowledge distillation from right-to-left factorized models, 4) fine-tuning on test sets from previous years, 5) model ensembling, 6) shallow fusion decoding with Transformer language models, and 7) noisy channel re-ranking. In addition, our biomedical task submission for English-Russian uses a biomedically biased vocabulary and is trained from scratch on news task data, medically relevant text curated from the news task dataset, and the biomedical data provided by the shared task. Our news system achieves a sacreBLEU score of 39.5 on the WMT'20 En-De test set, outperforming last year's best submission of 38.8. Our biomedical task Ru-En and En-Ru systems reach BLEU scores of 43.8 and 40.3, respectively, on the WMT'20 biomedical task test set, outperforming the previous year's best submissions.
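Checkpoint averaging, the first technique in the list above, simply averages parameters across several checkpoints of one training run. A minimal sketch, with plain Python lists standing in for the real parameter tensors:

```python
# Sketch of checkpoint averaging: parameters from several checkpoints of
# one run are averaged elementwise. Plain lists stand in for tensors.
def average_checkpoints(checkpoints):
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        columns = zip(*(ckpt[name] for ckpt in checkpoints))
        avg[name] = [sum(col) / n for col in columns]
    return avg

# Two toy checkpoints with a single parameter vector "w".
avg = average_checkpoints([{"w": [1.0, 3.0]}, {"w": [3.0, 5.0]}])
```

In practice the averaged model is often slightly better than the final checkpoint, since the average smooths optimization noise.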
Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic back-translation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English↔German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish→English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English→German.
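The back-translation recipe above can be sketched in a few lines; `translate_t2s` below is a hypothetical stand-in for a trained target-to-source model, and the toy sentences are illustrative:

```python
# Sketch of back-translation: monolingual target sentences are paired with
# automatically generated sources and treated as extra parallel data.
# `translate_t2s` is a hypothetical stand-in for a target-to-source model.
def back_translate(mono_target, translate_t2s):
    return [(translate_t2s(t), t) for t in mono_target]

def build_training_data(parallel, mono_target, translate_t2s):
    # The NMT architecture is unchanged; only the training set grows.
    return parallel + back_translate(mono_target, translate_t2s)

toy_t2s = lambda sentence: "src<" + sentence + ">"  # toy reverse model
data = build_training_data([("hallo", "hello")], ["good evening"], toy_t2s)
```

The key property is that the target side of every synthetic pair is genuine human text, so the decoder-side language model benefits even when the synthetic sources are noisy.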
Most translation tasks between languages are zero-resource translation problems for which parallel corpora are unavailable. Compared with two-pass pivot translation, multilingual neural machine translation (MNMT) can perform one-pass translation using a shared semantic space across all languages, but it generally underperforms pivot-based methods. In this paper, we propose a novel method called the Unified Multilingual Multiple-teacher model for NMT (UM4). Our method unifies source-teacher, target-teacher, and pivot-teacher models to guide the student model for zero-resource translation. The source and target teachers force the student to learn direct source-to-target translation through knowledge distilled on both the source and target sides. The pivot-teacher model further leverages monolingual corpora to enhance the student model. Experimental results show that our model covering 72 directions significantly outperforms previous methods on WMT benchmarks.
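One simple way to combine several teachers is to train the student toward a weighted mixture of the teachers' token distributions. The sketch below illustrates that idea only; the weights, vocabularies, and function names are made up for the example and are not the paper's exact formulation:

```python
# Illustrative multi-teacher mixing: the student's soft target for the next
# token is a weighted mixture of the source-, target-, and pivot-teacher
# distributions. All names and numbers here are hypothetical.
def mix_teacher_distributions(teacher_probs, weights):
    mixed = {}
    for probs, w in zip(teacher_probs, weights):
        for token, p in probs.items():
            mixed[token] = mixed.get(token, 0.0) + w * p
    return mixed

source_teacher = {"Haus": 0.7, "Heim": 0.3}
target_teacher = {"Haus": 0.6, "Heim": 0.4}
pivot_teacher = {"Haus": 0.8, "Heim": 0.2}
soft_target = mix_teacher_distributions(
    [source_teacher, target_teacher, pivot_teacher], [0.4, 0.4, 0.2])
```

As long as the weights sum to one, the mixture remains a valid probability distribution that the student can be trained against with a standard cross-entropy loss.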
Simultaneous machine translation (SiMT) is usually done via sequence-level knowledge distillation (Seq-KD) from a full-sentence neural machine translation (NMT) model. However, there is still a significant performance gap between NMT and SiMT. In this work, we propose to leverage monolingual data to improve SiMT, which trains a SiMT student on the combination of bilingual data and external monolingual data distilled by Seq-KD. Preliminary experiments on En-Zh and En-Ja news domain corpora demonstrate that monolingual data can significantly improve translation quality (e.g., +3.15 BLEU on En-Zh). Inspired by the behavior of human simultaneous interpreters, we propose a novel monolingual sampling strategy for SiMT, considering both chunk length and monotonicity. Experimental results show that our sampling strategy consistently outperforms the random sampling strategy (and other conventional NMT monolingual sampling strategies) by avoiding the key problem of SiMT -- hallucination -- and has better scalability. We achieve +0.72 BLEU improvements on average against random sampling on En-Zh and En-Ja. Data and codes can be found at https://github.com/hexuandeng/Mono4SiMT.
This paper describes our submission to the constrained track of the WMT21 shared news translation task. We focus on three relatively low-resource language pairs: Bengali to and from Hindi, English to and from Hausa, and Xhosa to and from Zulu. To overcome the limitation of relatively scarce parallel data, we train a multilingual model with a multitask objective that uses both parallel and monolingual data. In addition, we augment the data using back-translation. We also train a bilingual model incorporating back-translation and knowledge distillation, and then combine the two models using sequence-to-sequence mapping. We see relative gains of about 70% in BLEU for English to and from Hausa, and about 25% relative improvement for Bengali to and from Hindi and for Xhosa to and from Zulu, compared with bilingual baselines.
Preserving domain knowledge from source to target is crucial in any translation workflow. It is common in the translation industry to receive highly specialized projects for which there is hardly any parallel in-domain data. In such scenarios, where there is insufficient in-domain data to fine-tune machine translation (MT) models, generating translations that are consistent with the relevant context is challenging. In this work, we propose a novel approach to domain adaptation that leverages state-of-the-art pretrained language models (LMs) for domain-specific data augmentation for MT, simulating the domain characteristics of either (a) a small bilingual dataset, or (b) the monolingual source text to be translated. Combining this idea with back-translation, we can generate large amounts of synthetic bilingual in-domain data for both use cases. For our investigation, we use the state-of-the-art Transformer architecture. We employ mixed fine-tuning to train models that significantly improve the translation of in-domain texts. More specifically, our proposed methods achieve improvements of approximately 5-6 BLEU and 2-3 BLEU, respectively, in the two scenarios on the Arabic-to-English and English-to-Arabic language pairs. Furthermore, the results of human evaluation corroborate the automatic evaluation results.
Pre-training is an effective technique for ensuring robust performance on a variety of machine learning tasks. It typically depends on large-scale crawled corpora that can result in toxic or biased models. Such data can also be problematic with respect to copyright, attribution, and privacy. Pre-training with synthetic tasks and data is a promising way of alleviating such concerns since no real-world information is ingested by the model. Our goal in this paper is to understand what makes for a good pre-trained model when using synthetic resources. We answer this question in the context of neural machine translation by considering two novel approaches to translation model pre-training. Our first approach studies the effect of pre-training on obfuscated data derived from a parallel corpus by mapping words to a vocabulary of 'nonsense' tokens. Our second approach explores the effect of pre-training on procedurally generated synthetic parallel data that does not depend on any real human language corpus. Our empirical evaluation on multiple language pairs shows that, to a surprising degree, the benefits of pre-training can be realized even with obfuscated or purely synthetic parallel data. In our analysis, we consider the extent to which obfuscated and synthetic pre-training techniques can be used to mitigate the issue of hallucinated model toxicity.
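The word-obfuscation idea in the first approach can be illustrated directly: every word type is deterministically mapped to an opaque "nonsense" token, so distributional structure survives while lexical content does not. A minimal sketch (token names are arbitrary):

```python
# Sketch of corpus obfuscation: each word type gets a deterministic
# 'nonsense' token, preserving co-occurrence statistics but destroying
# lexical meaning, so no real-world content reaches the model.
def obfuscate_corpus(sentences):
    vocab = {}
    obfuscated = []
    for sentence in sentences:
        words = []
        for w in sentence.split():
            if w not in vocab:
                vocab[w] = "tok%d" % len(vocab)
            words.append(vocab[w])
        obfuscated.append(" ".join(words))
    return obfuscated, vocab

obf, vocab = obfuscate_corpus(["the cat sat", "the dog sat"])
```

Applying the same mapping to both sides of a parallel corpus yields obfuscated bitext whose alignment structure is intact, which is what makes it usable for translation pre-training.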
This report describes Microsoft's machine translation systems for the WMT21 shared task on large-scale multilingual machine translation. We participated in all three evaluation tracks, including the Large Track and two Small Tracks, where the former is unconstrained and the latter two are fully constrained. Our model submissions to the shared task were initialized with DeltaLM\footnote{\url{https://aka.ms/deltalm}}, a generic pre-trained multilingual encoder-decoder model, and correspondingly fine-tuned with the vast collected parallel data and allowed data sources according to the track settings, together with progressive learning and iterative back-translation to further improve performance. Our final submissions ranked first on the three tracks in terms of the automatic evaluation metric.
Due to the lack of parallel data for the current grammatical error correction (GEC) task, models based on the sequence-to-sequence framework cannot be trained adequately to obtain higher performance. We propose two data synthesis methods that can control the error rate and the ratio of error types in the synthetic data. The first method corrupts each word in a monolingual corpus with a fixed probability, via replacement, insertion, or deletion. The second trains error generation models and further filters their decoding results. Experiments on different synthetic data show that an error rate of 40% and an equal ratio of error types improve model performance. Finally, we synthesize about 100 million examples and achieve performance comparable to the state of the art, which uses twice as much data as we do.
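The first synthesis method, corrupting each word with a fixed probability and a controllable mix of replacement, insertion, and deletion, can be sketched as follows. The vocabulary, rates, and function names are illustrative only:

```python
import random

# Sketch of probability-controlled corruption for synthetic GEC data:
# each word is corrupted with probability `err_rate`, and the error type
# is drawn according to `type_weights` (replace / insert / delete).
def corrupt(tokens, err_rate, type_weights, vocab, rng):
    noisy = []
    for tok in tokens:
        if rng.random() < err_rate:
            op = rng.choices(["replace", "insert", "delete"],
                             weights=type_weights)[0]
            if op == "replace":
                noisy.append(rng.choice(vocab))
            elif op == "insert":
                noisy.append(tok)
                noisy.append(rng.choice(vocab))
            # "delete": the token is simply dropped
        else:
            noisy.append(tok)
    return noisy

rng = random.Random(0)
noisy = corrupt("she goes to school every day".split(),
                0.4, [1, 1, 1], ["a", "the", "go", "at"], rng)
```

Each (noisy, clean) pair then serves as a training example, with the error rate and type mix tunable exactly as the abstract describes.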
AMR parsing has experienced an unprecedented increase in performance over the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performance forward. However, for the most recent high-performing parsers, the effect of self-learning and silver data generation seems to be fading. In this paper, we show that these diminishing returns of silver data can be overcome by combining Smatch-based ensembling techniques with ensemble distillation. In an extensive experimental setup, we push single-model English parser performance above 85 Smatch for the first time and return to substantial gains. We also achieve a new state of the art for cross-lingual AMR parsing for Chinese, German, Italian, and Spanish. Finally, we explore the impact of the proposed distillation technique on domain adaptation, and show that it can yield gains competitive with those from human-annotated data on QALD-9 and achieve a new state of the art for BioAMR.
Neural machine translation (NMT) has attracted wide attention due to its impressive quality. Beyond quality, controlling translation styles is also an important demand for many languages. Previous related studies mainly focus on controlling formality and achieve some improvements, but they still face two challenges. The first is the evaluation limitation: style contains abundant information, including lexis, syntax, and more, yet only formality has been well studied. The second is the heavy reliance on iterative fine-tuning when new styles are required. Correspondingly, this paper contributes in terms of both benchmark and approach. First, we revisit this task and propose a multiway stylized machine translation (MSMT) benchmark, which includes multiple categories of styles in four language directions, to push the boundary of this task. Second, we propose a method named style activation prompt (StyleAP), which retrieves prompts from a stylized monolingual corpus and needs no extra fine-tuning. Experiments show that StyleAP effectively controls the style of translation and achieves remarkable performance. All of our data and code are released at https://github.com/IvanWang0730/StyleAP.
This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART, a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. mBART is the first method for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, whereas previous approaches have focused only on the encoder, the decoder, or reconstructing parts of the text. Pre-training a complete model allows it to be directly fine-tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifications. We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, including up to 12 BLEU points for low-resource MT and over 5 BLEU points for many document-level and unsupervised models. We also show that it enables new types of transfer to language pairs with no bi-text or that were not in the pre-training corpus, and present extensive analysis of which factors contribute the most to effective pre-training.
GPT-2 and BERT have demonstrated the effectiveness of using pre-trained language models (LMs) on various natural language processing tasks. However, LM fine-tuning often suffers from catastrophic forgetting when applied to resource-rich tasks. In this work, we introduce a concerted training framework (CTNMT) that is key to integrating pre-trained LMs into neural machine translation (NMT). Our proposed CTNMT consists of three techniques: a) asymptotic distillation, to ensure that the NMT model can retain the previous pre-trained knowledge; b) a dynamic switching gate, to avoid catastrophic forgetting of the pre-trained knowledge; and c) a strategy to adjust the learning pace according to a scheduled policy. Our machine translation experiments show that CTNMT gains up to 3 BLEU points on the WMT14 English-German pair, even surpassing the previous state-of-the-art pre-training-aided NMT. For the large WMT14 English-French task with 40 million sentence pairs, our base model still significantly improves upon the state-of-the-art Transformer-big model by more than 1 BLEU point. The code and models can be downloaded from https://github.com/bytedance/neurst/tree/master/examples/ctnmt.
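The dynamic switching gate can be pictured as an elementwise interpolation between the pre-trained LM hidden state and the NMT hidden state. In the real model the gate values come from a learned sigmoid over the hidden states; in this sketch they are supplied directly, and all numbers are illustrative:

```python
# Illustration of a dynamic switching gate: the fused hidden state is an
# elementwise interpolation h = g * h_lm + (1 - g) * h_nmt. Here the gate
# values are given explicitly; in CTNMT they come from a learned sigmoid.
def switch_gate(h_lm, h_nmt, gate):
    return [g * a + (1.0 - g) * b for g, a, b in zip(gate, h_lm, h_nmt)]

fused = switch_gate([1.0, 2.0], [3.0, 4.0], [0.25, 0.5])
```

A gate near 1 keeps the pre-trained representation, a gate near 0 defers to the NMT state, which is how the mechanism guards against catastrophic forgetting.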
We report the results of the first edition of the WMT shared task on Translation Suggestion (TS). The task aims to provide alternatives for specific words or phrases given the entire documents generated by machine translation (MT). It consists of two sub-tasks, namely naive translation suggestion and translation suggestion with hints. The main difference is that some hints are provided in sub-task two; therefore, it is easier for the model to generate more accurate suggestions. For sub-task one, we provide corpora for the language pairs English-German and English-Chinese; only an English-Chinese corpus is provided for sub-task two. We received 92 submissions from 5 participating teams in sub-task one and 6 submissions for sub-task two, most of them covering all of the translation directions. We used the automatic metric BLEU to evaluate the performance of each submission.
With the broad reach of the internet and smartphones, the user base of e-commerce platforms is increasingly diverse. Since native-language users are often not conversant in English, their preferred browsing mode is their regional language or a combination of their regional language and English. From our recent study of query data, we noticed that many of the queries we receive are code-mixed, specifically Hinglish, i.e., queries with one or more Hindi words written in English (Latin) script. We propose a Transformer-based approach for code-mixed query translation to enable users to search with such queries. We demonstrate the effectiveness of pre-trained encoder-decoder models trained on a large corpus of unlabeled English text for this task. Using generic-domain translation models, we created a pseudo-labeled dataset for training the model on search queries, and verified the effectiveness of various data augmentation techniques. Further, to reduce the model's latency, we use knowledge distillation and weight quantization. The effectiveness of the approach has been validated through experimental evaluations and A/B testing. The model is currently live on the Flipkart app and website, serving millions of queries.
Word-level quality estimation (QE) for machine translation (MT) aims to find potential translation errors in a translated sentence without a reference. Typically, conventional works on word-level QE predict translation quality in terms of the post-editing effort, where the word labels ('OK' and 'BAD') are generated automatically by comparing the words in the MT sentence with those in the post-edited sentence through the Translation Error Rate (TER) toolkit. While post-editing effort can measure translation quality to some extent, we find that it often conflicts with human judgment of whether a word is translated well or poorly. To overcome this limitation, we first create a golden benchmark dataset, namely \emph{HJQE} (Human Judgement on Quality Estimation), in which expert translators directly annotate poorly translated words based on their judgment. Additionally, to further make use of parallel corpora, we propose self-supervised pre-training with two tag-correcting strategies, namely a tag refinement strategy and a tree-based annotation strategy, to make the TER-based artificial QE corpus closer to \emph{HJQE}. We conduct substantial experiments on the publicly available WMT En-De and En-Zh corpora. The results not only show that our proposed dataset is more consistent with human judgment but also confirm the effectiveness of the proposed tag-correcting strategies.
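The TER-based labeling scheme this work builds on can be approximated with a standard sequence alignment: an MT token is tagged 'OK' if it survives unchanged into the post-edit, 'BAD' otherwise. A rough sketch using Python's difflib (a simplification, since real TER alignment also allows shift operations):

```python
import difflib

# Rough stand-in for TER-style word labels: an MT token is 'OK' if it
# appears unchanged in the aligned post-edit, otherwise 'BAD'.
def ter_style_tags(mt_tokens, pe_tokens):
    matcher = difflib.SequenceMatcher(a=mt_tokens, b=pe_tokens,
                                      autojunk=False)
    tags = ["BAD"] * len(mt_tokens)
    for block in matcher.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            tags[i] = "OK"
    return tags

tags = ter_style_tags("the cat sit on mat".split(),
                      "the cat sat on the mat".split())
```

This makes the paper's point concrete: a token like "sit" is flagged only because the post-editor changed it, which need not coincide with a human judgment that the word was badly translated.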