This paper presents our contribution to PolEval 2021 Task 2: evaluation of translation quality assessment metrics. We describe experiments with pre-trained language models and state-of-the-art frameworks for translation quality assessment in both the nonblind and blind versions of the task. Our solution ranked second in the nonblind version and third in the blind version.
Modern embedding-based metrics for evaluation of generated text generally fall into one of two paradigms: discriminative metrics that are trained to directly predict which outputs are of higher quality according to supervised human annotations, and generative metrics that are trained to evaluate text based on the probabilities of a generative model. Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text. In this paper, we present a framework that combines the best of both worlds, using both supervised and unsupervised signals from whatever data we have available. We operationalize this idea by training T5Score, a metric that uses these training signals with mT5 as the backbone. We perform an extensive empirical comparison with other existing metrics on 5 datasets, 19 languages and 280 systems, demonstrating the utility of our method. Experimental results show that T5Score achieves the best performance against existing top-scoring metrics on all datasets at the segment level. We release our code and models at https://github.com/qinyiwei/T5Score.
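As a rough illustration of the generative signal described above, the sketch below (not the released T5Score code; the mT5 checkpoint name and helper function are illustrative choices) scores a hypothesis by the average log-probability an mT5 model assigns to it given the source.

```python
# A rough sketch of a generative scoring signal: the average log-probability
# an mT5 model assigns to the hypothesis given the source.
# The checkpoint name is an illustrative choice, not taken from the paper.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def generative_score(source: str, hypothesis: str) -> float:
    """Average per-token log-likelihood of the hypothesis given the source."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(hypothesis, return_tensors="pt").input_ids
    with torch.no_grad():
        # model(...).loss is the mean token-level cross-entropy;
        # negating it gives an average log-probability (higher is better).
        loss = model(**inputs, labels=labels).loss
    return -loss.item()

print(generative_score("Der Hund bellt.", "The dog is barking."))
```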
End-to-end speech-to-speech translation (S2ST) is generally evaluated with text-based metrics. This means that generated speech has to be automatically transcribed, making the evaluation dependent on the availability and quality of automatic speech recognition (ASR) systems. In this paper, we propose a text-free evaluation metric for end-to-end S2ST, named BLASER, to avoid the dependency on ASR systems. BLASER leverages a multilingual multimodal encoder to directly encode the speech segments for source input, translation output and reference into a shared embedding space and computes a score of the translation quality that can be used as a proxy to human evaluation. To evaluate our approach, we construct training and evaluation sets from more than 40k human annotations covering seven language directions. The best results of BLASER are achieved by training with supervision from human rating scores. We show that when evaluated at the sentence level, BLASER correlates significantly better with human judgment compared to ASR-dependent metrics including ASR-SENTBLEU in all translation directions and ASR-COMET in five of them. Our analysis shows that combining speech and text as inputs to BLASER does not increase the correlation with human scores; the best correlations are achieved when using speech alone, which motivates the goal of our research. Moreover, we show that using ASR for references is detrimental for text-based metrics.
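To illustrate the embedding-based scoring idea in the simplest possible terms, the sketch below uses placeholder vectors in place of the actual speech encoder and a combination rule that is assumed rather than taken from the paper: it averages cosine similarities between source, translation, and reference embeddings in a shared space.

```python
# A minimal sketch of an unsupervised, embedding-based quality proxy.
# The encoder itself is out of scope here; the vectors below are placeholders,
# and the exact combination used by BLASER may differ.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def unsupervised_proxy(src_emb, mt_emb, ref_emb) -> float:
    # Average the translation's similarity to the source and to the reference,
    # both measured in the shared embedding space.
    return 0.5 * (cosine(src_emb, mt_emb) + cosine(ref_emb, mt_emb))

rng = np.random.default_rng(0)
src, mt, ref = (rng.normal(size=512) for _ in range(3))  # placeholder embeddings
print(unsupervised_proxy(src, mt, ref))
```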
We hypothesize that existing sentence-level machine translation (MT) metrics become less effective when the human reference contains ambiguity. To verify this hypothesis, we present a very simple method for extending pretrained metrics to incorporate context at the document level. We apply our method to three popular metrics, BERTScore, Prism, and COMET, as well as the reference-free metric COMET-QE. We evaluate the extended metrics on the WMT 2021 Metrics Shared Task using the provided MQM annotations. Our results show that the extended metrics outperform their sentence-level counterparts in about 85% of the test conditions when excluding results on low-quality human references. In addition, we show that our document-level extension substantially improves accuracy on a discourse-phenomena task, outperforming a dedicated baseline by up to 6.1%. Our experimental results support our initial hypothesis and show that a simple extension of the metrics enables them to exploit context to resolve ambiguities in the reference.
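A minimal sketch of the context-extension idea, assuming the extension simply prepends the k preceding sentences to the hypothesis and reference before scoring; `sentence_metric` is a hypothetical stand-in for any pretrained sentence-level metric, not a real library API.

```python
# A minimal sketch of document-level extension of a sentence-level metric:
# prepend the k preceding sentences of the document as context before scoring.
from typing import Callable, List

def doc_level_scores(
    hyps: List[str],
    refs: List[str],
    sentence_metric: Callable[[str, str], float],  # hypothetical metric wrapper
    k: int = 2,
    sep: str = " ",
) -> List[float]:
    scores = []
    for i in range(len(hyps)):
        ctx_hyp = sep.join(hyps[max(0, i - k): i] + [hyps[i]])
        ctx_ref = sep.join(refs[max(0, i - k): i] + [refs[i]])
        scores.append(sentence_metric(ctx_hyp, ctx_ref))
    return scores
```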
Translation quality estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. As an important component in practical applications of MT, this task has attracted increasing attention. In this paper, we first propose XLMRScore, a simple unsupervised QE method based on BERTScore computed with the XLM-RoBERTa (XLMR) model, and discuss the issues that arise when using this method. We then suggest two mitigations: replacing untranslated words with the unknown token, and cross-lingual alignment of the pretrained model to represent aligned words closer to each other. We evaluate the proposed method on the four low-resource language pairs of the WMT21 QE shared task, as well as a new English-Farsi test dataset introduced in this paper. Experiments show that our method achieves results comparable to the supervised baseline in two zero-shot scenarios, i.e., with less than 0.01 difference in Pearson correlation, while outperforming the unsupervised competitor by more than 8% on average across all low-resource language pairs.
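The core of a BERTScore-style computation with XLM-RoBERTa can be sketched as below; this omits the paper's two mitigations (unknown-token replacement and cross-lingual alignment) as well as IDF weighting, and is only meant to show the basic greedy-matching idea.

```python
# A minimal BERTScore-like, reference-free score: greedy matching between
# source and MT-output token embeddings taken from XLM-RoBERTa.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base").eval()

def embed(text: str) -> torch.Tensor:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]       # (tokens, dim)
    return torch.nn.functional.normalize(hidden, dim=-1)

def xlmr_f1(source: str, hypothesis: str) -> float:
    src, hyp = embed(source), embed(hypothesis)
    sim = hyp @ src.T                                    # cosine similarities
    precision = sim.max(dim=1).values.mean()             # best match per hypothesis token
    recall = sim.max(dim=0).values.mean()                # best match per source token
    return float(2 * precision * recall / (precision + recall))

print(xlmr_f1("Der Hund bellt laut.", "The dog barks loudly."))
```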
We release 70 small and discriminative test sets for machine translation (MT) evaluation, called variance-aware test sets (VAT), covering 35 translation directions from the WMT16 to WMT20 competitions. VAT is created automatically by a novel variance-aware filtering method that filters out indiscriminative test instances from current MT test sets without any human effort. Experimental results show that VAT outperforms the original WMT test sets in terms of correlation with human judgment across mainstream language pairs and test sets. A further analysis of the properties of VAT reveals challenging linguistic features (e.g., low-frequency words and proper nouns) for competitive MT systems, providing guidance for constructing future MT test sets. The test sets and the code for preparing variance-aware MT test sets are freely available at https://github.com/nlp2ct/variance-aware-mt-test-sets.
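A minimal sketch of variance-aware filtering, under the assumption that the method ranks test instances by the variance of per-sentence scores across MT systems and keeps the most discriminative ones; the keep ratio is illustrative rather than the paper's value.

```python
# A minimal sketch of variance-aware filtering of MT test instances.
import numpy as np

def variance_aware_filter(scores: np.ndarray, keep_ratio: float = 0.4) -> np.ndarray:
    """scores: (num_instances, num_systems) matrix of per-sentence metric scores.
    Returns the indices of the most discriminative instances."""
    variances = scores.var(axis=1)                  # variance across systems per instance
    k = max(1, int(len(variances) * keep_ratio))
    return np.argsort(variances)[::-1][:k]          # highest-variance instances first

# Toy example: 1000 sentences scored for 8 systems.
scores = np.random.default_rng(0).random((1000, 8))
print(variance_aware_filter(scores)[:10])
```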
Neural metrics achieve impressive correlations with human judgments in the evaluation of machine translation systems, but before we can safely optimize against such metrics, we should be aware of (and ideally eliminate) biases toward bad translations that receive high scores. Our experiments show that sample-based minimum Bayes risk (MBR) decoding can be used to explore and quantify such weaknesses. Applying this strategy to COMET for en-de and de-en, we find that the COMET models are not sensitive enough to discrepancies in numbers and named entities. We further show that these biases are hard to fully remove by simply training on additional synthetic data, and we release our code and data to facilitate further experiments.
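Sample-based MBR decoding, as used here to surface metric weaknesses, can be sketched as follows; `utility` is a hypothetical callable standing in for a metric such as COMET applied with the other samples as pseudo-references.

```python
# A minimal sketch of sample-based minimum Bayes risk (MBR) decoding:
# draw candidate translations by sampling, then pick the candidate with the
# highest average utility against all other samples used as pseudo-references.
from typing import Callable, List

def mbr_decode(candidates: List[str], utility: Callable[[str, str], float]) -> str:
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        # Expected utility of `hyp` with every other sample as pseudo-reference.
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(1, len(candidates) - 1)
        if score > best_score:
            best, best_score = hyp, score
    return best
```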
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
Quality estimation, as a key step in the quality control of machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year's WMT QE shared task, we leverage the large-scale XLM-RoBERTa pretrained model and additionally propose several useful features for evaluating the uncertainty of translations to build our QE system, named QEMind. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of critical error detection. In this paper, we present our submissions to the WMT 2021 QE shared task, and extensive experimental results show that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
Estimating the quality of machine translation systems is an ongoing challenge for researchers in this field. Many previous attempts to use round-trip translation as a measure of quality have failed, and there is considerable disagreement about whether it is a viable approach to quality estimation. In this paper, we revisit round-trip translation, proposing a system designed to solve the pitfalls previously found with this approach. Our method leverages recent advances in language representation learning to more accurately measure the similarity between the original and round-trip translated sentences. Experiments show that while our method does not reach the performance of the current state of the art, it may still be an effective approach for some language pairs.
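A minimal sketch of the similarity measurement, assuming a multilingual sentence encoder (the sentence-transformers model name is an illustrative choice, not necessarily the paper's): embed the original sentence and its round-trip translation and use their cosine similarity as a quality proxy.

```python
# A minimal sketch of round-trip similarity with a sentence encoder.
# The translation step is stubbed out: `round_trip` would normally be produced
# by translating `original` to the target language and back with the MT system
# under evaluation.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def round_trip_similarity(original: str, round_trip: str) -> float:
    emb = encoder.encode([original, round_trip], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

print(round_trip_similarity("The cat sat on the mat.",
                            "The cat was sitting on the mat."))
```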
This paper presents Adam Mickiewicz University's (AMU) submissions to the WMT 2022 General MT Task. We participated in the Ukrainian $\leftrightarrow$ Czech translation directions. The systems are a weighted ensemble of four models based on the Transformer (big) architecture. The models use source factors to exploit information about named entities present in the input. Each of the models in the ensemble was trained using only the data provided by the shared task organizers. A noisy back-translation technique was used to augment the training corpora. One of the models in the ensemble is a document-level model, trained on parallel and synthetic longer sequences. During sentence-level decoding, the ensemble generates n-best lists. These are merged with the n-best lists generated by a single document-level model, which translates multiple sentences at a time. Finally, the n-best lists are reranked using existing quality estimation models and minimum Bayes risk decoding, so that the best hypothesis is selected according to the COMET evaluation metric. According to the automatic evaluation results, our systems rank first in both translation directions.
Parallel corpora are usually required to automatically evaluate translation quality using metrics such as BLEU, METEOR, and BERTScore. While the reference-based evaluation paradigm is widely used in many machine translation tasks, it is difficult to apply to translation with low-resource languages, as these languages suffer from a shortage of corpora. Round-trip translation offers an encouraging way to alleviate the urgent requirement for parallel corpora, although it was unfortunately not observed to correlate with forward translation quality in the era of statistical machine translation. In this paper, we first observe that forward translation quality consistently correlates with the corresponding round-trip translation quality in the neural machine translation regime. We then carefully analyze and reveal the reasons for the contradictory results reported for statistical machine translation systems. Second, we propose a simple yet effective regression method to predict forward translation scores from round-trip translation scores for various language pairs, including those between very low-resource languages. We conduct extensive experiments to show the effectiveness and robustness of the predictive models on more than 1,000 language pairs. Finally, we test our method in challenging settings, such as predicting scores i) for language pairs unseen during training, and ii) on real-world WMT shared tasks but in new domains. The extensive experiments demonstrate the robustness and utility of our approach. We believe our work will inspire further research on very low-resource multilingual machine translation.
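The regression step can be sketched with a simple linear model; the numbers below are synthetic placeholders, not figures from the paper.

```python
# A minimal sketch of the regression idea: fit a regressor that maps round-trip
# translation scores to forward translation scores, then use it to predict
# forward quality where no reference-based score is available.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
round_trip_scores = rng.uniform(10, 60, size=(200, 1))                  # e.g. round-trip BLEU
forward_scores = 0.8 * round_trip_scores[:, 0] + rng.normal(0, 3, 200)  # toy relationship

model = LinearRegression().fit(round_trip_scores, forward_scores)
print(model.predict([[35.0]]))   # predicted forward score for a new language pair
```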
Multilingual Neural Machine Translation (MNMT) models leverage many language pairs during training to improve translation quality for low-resource languages by transferring knowledge from high-resource languages. We study the quality of a domain-adapted MNMT model in the medical domain for English-Romanian with automatic metrics and a human error typology annotation which includes terminology-specific error categories. We compare the out-of-domain MNMT with the in-domain adapted MNMT. The in-domain MNMT model outperforms the out-of-domain MNMT in all measured automatic metrics and produces fewer terminology errors.
We present the task of PreQuEL, Pre-(Quality-Estimation) Learning. A PreQuEL system predicts how well a given sentence will be translated, without recourse to the actual translation, thus eschewing unnecessary resource allocation when translation quality is bound to be low. PreQuEL can be defined relative to a given MT system (e.g., some industry service) or generally relative to the state-of-the-art. From a theoretical perspective, PreQuEL places the focus on the source text, tracing properties, possibly linguistic features, that make a sentence harder to machine translate. We develop a baseline model for the task and analyze its performance. We also develop a data augmentation method (from parallel corpora) that improves results substantially. We show that this augmentation method can improve the performance of the Quality-Estimation task as well. We investigate the properties of the input text that our model is sensitive to by testing it on challenge sets and different languages. We conclude that it is aware of syntactic and semantic distinctions, and that it correlates with and even over-emphasizes the importance of standard NLP features.
Recently proposed BERT-based evaluation metrics perform well on standard evaluation benchmarks but are vulnerable to adversarial attacks, e.g., relating to factuality errors. We argue that this is (in part) because they are models of semantic similarity. In contrast, we develop evaluation metrics based on natural language inference (NLI), which we deem a more appropriate modeling choice. We design a preference-based adversarial attack framework and show that our NLI-based metrics are much more robust to the attacks than recent BERT-based metrics. On standard benchmarks, our NLI-based metrics outperform existing summarization metrics but perform below SOTA MT metrics. However, when we combine existing metrics with our NLI metrics, we obtain both higher adversarial robustness (+20% to +30%) and higher-quality metrics as measured on standard benchmarks (+5% to +25%).
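The combination of an NLI signal with an existing metric can be sketched as a simple interpolation; `nli_entailment` and `base_metric` are hypothetical callables standing in for an NLI model and a similarity-based metric, and the mixing weight is illustrative rather than the paper's.

```python
# A minimal sketch of mixing an NLI-derived score with an existing metric score.
from typing import Callable

def combined_metric(
    reference: str,
    hypothesis: str,
    nli_entailment: Callable[[str, str], float],  # hypothetical NLI wrapper
    base_metric: Callable[[str, str], float],     # hypothetical metric wrapper
    weight: float = 0.5,
) -> float:
    # Average entailment in both directions to penalize both added and missing
    # content, then interpolate with the similarity-based metric score.
    nli_score = 0.5 * (nli_entailment(reference, hypothesis)
                       + nli_entailment(hypothesis, reference))
    return weight * nli_score + (1.0 - weight) * base_metric(reference, hypothesis)
```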
As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings.
We present the CUNI-Bergamot submission for the WMT22 General translation task. We compete in the English$\rightarrow$Czech direction. Our submission further explores block backtranslation techniques. Compared to previous work, we measure performance in terms of COMET score and named-entity translation accuracy. We evaluate the performance of MBR decoding compared to traditional mixed backtranslation training, and we show a possible synergy when using both techniques simultaneously. The results show that both approaches are effective means of improving translation quality and that they yield even better results when combined.
This work applies minimum Bayes risk (MBR) decoding to optimize translation quality as measured by various automated metrics. Automatic metrics in machine translation have recently made tremendous progress. In particular, neural metrics fine-tuned on human ratings (e.g., BLEURT or COMET) outperform surface metrics in terms of correlation with human judgments. Our experiments show that the combination of a neural translation model with a neural reference-based metric, BLEURT, results in significant improvements in both automatic and human evaluation. This improvement is obtained with translations that differ from classical beam-search output: these translations have lower likelihood and are less favored by surface metrics like BLEU.
Automatic machine translation (MT) metrics are widely used to distinguish the translation qualities of machine translation systems across relatively large test sets (system-level evaluation). However, it is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level (segment-level evaluation). In this paper, we investigate how useful MT metrics are at detecting the success of a machine translation component when placed in a larger platform with a downstream task. We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we only have access to a monolingual task-specific model. We calculate the correlation between the metric's ability to predict a good/bad translation and the success/failure of the final task in the Translate-Test setup. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes. We also find that the scores provided by neural metrics are not interpretable, mostly because of undefined ranges. Our analysis suggests that future MT metrics should be designed to produce error labels rather than scores to facilitate extrinsic evaluation.
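The segment-level analysis described above can be sketched as a correlation between per-sentence metric scores and binary downstream success; the arrays below are toy placeholders, not data from the paper.

```python
# A minimal sketch of correlating per-segment metric scores with binary
# downstream success (1 = the task component produced the right output).
import numpy as np
from scipy.stats import pearsonr

metric_scores = np.array([0.91, 0.42, 0.78, 0.35, 0.66, 0.88])  # e.g. COMET per segment
task_success = np.array([1, 0, 1, 0, 0, 1])                     # downstream outcome

r, p = pearsonr(metric_scores, task_success)  # point-biserial correlation
print(f"correlation={r:.3f}, p-value={p:.3f}")
```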
Preservation of domain knowledge from source to target is crucial in any translation workflow. It is common in the translation industry to receive highly specialized projects where there is hardly any parallel in-domain data. In such scenarios, where there is insufficient in-domain data to fine-tune machine translation (MT) models, generating translations consistent with the relevant context is challenging. In this work, we propose a novel approach to domain adaptation that leverages state-of-the-art pretrained language models (LMs) for domain-specific data augmentation for MT, simulating the domain characteristics of either (a) a small bilingual dataset, or (b) the monolingual source text to be translated. Combining this idea with back-translation, we can generate huge amounts of synthetic bilingual in-domain data for both use cases. For our investigation, we use the state-of-the-art Transformer architecture. We employ mixed fine-tuning to train models that significantly improve translation of in-domain texts. More specifically, in both scenarios, our proposed methods achieve improvements of approximately 5-6 BLEU and 2-3 BLEU for the Arabic-to-English and English-to-Arabic language pairs, respectively. Furthermore, the outcome of human evaluation corroborates the automatic evaluation results.
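Mixed fine-tuning can be sketched as continuing training on a mixture of the generic parallel data and the much smaller synthetic in-domain data, with the in-domain portion oversampled so it is not drowned out; the oversampling ratio below is illustrative, not taken from the paper.

```python
# A minimal sketch of building a mixed fine-tuning corpus.
import random

def mixed_finetuning_corpus(generic, in_domain, oversample: int = 9):
    """generic, in_domain: lists of (source, target) sentence pairs."""
    mixture = list(generic) + list(in_domain) * oversample
    random.shuffle(mixture)
    return mixture
```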