Manually constructing a WordNet is an arduous task that requires years of expert time. As a first step toward automatically building complete WordNets, we propose approaches that use publicly available WordNets, machine translators, and/or monolingual dictionaries to generate WordNet synsets for both resource-rich and resource-poor languages. Our algorithms translate synsets of existing WordNets to a target language T, then apply a ranking method to the translation candidates to find the best translations in T. Our approach is applicable to any language that has at least one existing bilingual dictionary translating English to it.
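The candidate-ranking step above can be illustrated with a simple voting scheme: every (synset word, translation source) pair casts a vote for the target-language word it produces, and candidates are ranked by vote count. This is a toy sketch; the dictionary data and the names `SOURCES` and `translate_synset` are hypothetical stand-ins, not the paper's actual implementation.

```python
from collections import Counter

# Hypothetical stand-ins for several English->T translation sources
# (bilingual dictionaries and/or machine translators).
SOURCES = [
    {"dog": "hund", "hound": "hund"},
    {"dog": "hund", "hound": "jakthund"},
]

def translate_synset(synset_words):
    """Translate every word of an English synset with every source,
    then rank the target-language candidates by how many
    (word, source) pairs voted for them."""
    votes = Counter()
    for word in synset_words:
        for source in SOURCES:
            if word in source:
                votes[source[word]] += 1
    # Candidates produced by more words/sources rank higher.
    return [cand for cand, _ in votes.most_common()]
```

For the synset {dog, hound}, "hund" collects three votes against one for "jakthund", so it ranks first; the ranking methods in the paper are more sophisticated, but follow the same intuition that agreement across sources signals a good translation.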
The approach presented in this paper automatically creates large numbers of new bilingual dictionaries for low-resource languages, especially resource-poor ones, from a single input bilingual dictionary. Our algorithms use available WordNets and a machine translator (MT) to generate translations of words in a source language into a rich target language. Because our methods rely only on one input dictionary, available WordNets, and an MT, they are applicable to any bilingual dictionary as long as one of its two languages is English or has a WordNet linked to the Princeton WordNet. Starting from 5 available bilingual dictionaries, we created 48 new bilingual dictionaries. Of these, 30 are for language pairs not supported by the popular MTs Google and Bing.
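The core pivot idea, chaining an L1-to-English dictionary with an English-to-L2 resource to obtain a new L1-to-L2 dictionary, can be sketched as follows. The toy dictionaries and the function name `pivot_dictionary` are made-up illustrations under that assumption, not the paper's code.

```python
# Hypothetical toy inputs: one L1->English dictionary and one
# English->L2 resource (e.g., derived from an MT or a linked WordNet).
L1_TO_EN = {"aqua": ["water"]}    # made-up L1 entries
EN_TO_L2 = {"water": ["wasser"]}  # made-up L2 entries

def pivot_dictionary(l1_to_en, en_to_l2):
    """Create a new L1->L2 dictionary by pivoting through English."""
    new_dict = {}
    for l1_word, en_words in l1_to_en.items():
        translations = set()
        for en in en_words:
            # Collect every L2 rendering of each English pivot word.
            translations.update(en_to_l2.get(en, []))
        if translations:
            new_dict[l1_word] = sorted(translations)
    return new_dict
```

In practice a naive pivot like this over-generates because English pivot words are polysemous; the WordNet-based ranking described in the paper exists precisely to filter such spurious chains.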
This paper examines approaches for generating lexical resources for endangered languages. Our algorithms construct bilingual dictionaries and multilingual thesauruses using public WordNets and a machine translator (MT). Because our work relies on only one bilingual dictionary between an endangered language and an "intermediate helper" language, it is applicable to languages that lack many existing resources.
Bilingual dictionaries are expensive resources, and few are available when one of the languages is resource-poor. In this paper, we propose algorithms for creating new reverse bilingual dictionaries from existing bilingual dictionaries in which English is one of the two languages. Our algorithms exploit the similarity between word-concept pairs, using the English WordNet, to produce reverse dictionary entries. Because our algorithms rely only on available bilingual dictionaries, they are applicable to any bilingual dictionary as long as one of its two languages has a WordNet-style lexical ontology.
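A minimal sketch of the reversal idea, assuming a tiny hand-written synset inventory in place of the English WordNet: a reverse entry (English word, L2 word) is kept when the English word's synsets overlap those of a sibling translation, a crude stand-in for the word-concept similarity the paper computes. All data and names here (`SYNSETS`, `build_reverse_dictionary`) are hypothetical.

```python
from collections import defaultdict

# Toy synset inventory standing in for the English WordNet.
SYNSETS = {
    "small": {"small.a.01", "little.a.01"},
    "little": {"little.a.01", "small.a.01"},
    "minor": {"minor.a.01"},
}

def build_reverse_dictionary(l2_to_en):
    """Invert an L2->English dictionary into English->L2, keeping a
    pair only if the English word is the sole translation or shares a
    synset with another English translation of the same L2 word."""
    reverse = defaultdict(set)
    for l2_word, en_words in l2_to_en.items():
        for en in en_words:
            others = [e for e in en_words if e != en]
            if not others or any(
                SYNSETS.get(en, set()) & SYNSETS.get(o, set())
                for o in others
            ):
                reverse[en].add(l2_word)
    return {en: sorted(ws) for en, ws in reverse.items()}
```

The synset-overlap check filters out translations whose senses are unsupported by the rest of the entry, which is the role WordNet similarity plays in the paper's algorithms.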
Past dictionary-based approaches for translating a phrase in language L1 into language L2 require grammar rules to reorder the initial translations. This paper introduces a novel method that translates a given phrase in L1, even one that does not appear in any L1-L2 dictionary, into L2 without using any grammar rules. We require only one L1-L2 bilingual dictionary and n-gram data in L2. The average manual evaluation score of our translations is 4.29/5.00, indicating very high quality.
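The dictionary-plus-n-gram idea can be sketched as: translate each word of the phrase with the bilingual dictionary, enumerate the combinations, and keep the combination whose adjacent word pairs are most frequent in the L2 n-gram data. The toy dictionary, bigram counts, and function name below are hypothetical, and real n-gram data would use smoothed log-probabilities rather than raw counts.

```python
from itertools import product

# Hypothetical toy data: an L1->L2 word dictionary and L2 bigram counts.
DICTIONARY = {
    "rot": ["red"],
    "haus": ["house", "home"],
}
BIGRAM_COUNTS = {
    ("red", "house"): 120,
    ("red", "home"): 3,
}

def translate_phrase(phrase):
    """Translate an L1 phrase word by word, then rank the candidate
    combinations by their L2 bigram frequency."""
    options = [DICTIONARY.get(w, [w]) for w in phrase.split()]
    candidates = list(product(*options))

    def score(cand):
        # Sum bigram counts over adjacent word pairs in the candidate.
        return sum(BIGRAM_COUNTS.get(p, 0) for p in zip(cand, cand[1:]))

    return " ".join(max(candidates, key=score))
```

Here the n-gram data, not grammar rules, decides that "red house" is the fluent rendering rather than "red home".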
Even in highly developed countries, as much as 15-30% of the population can only understand texts written with a basic vocabulary. Their comprehension of everyday texts is limited, which prevents them from playing an active role in society and from making informed decisions about healthcare, legal representation, or democratic choices. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones while preserving the original meaning. It has attracted considerable attention over the past 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to progress in this field is the lack of high-quality datasets for building and evaluating lexical simplification systems. We present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details on data selection and annotation procedures. It is the first dataset that allows direct comparison of lexical simplification systems across three languages. To demonstrate the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset. For a fairer comparison, we use several evaluation measures that capture different aspects of system effectiveness, and discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification system outperforms the state-of-the-art non-neural one in all three languages. More importantly, we find that the state-of-the-art neural system performs significantly better for English than for Spanish and Portuguese.
Neural machine translation (NMT) models are effective on large bilingual datasets. However, existing methods and techniques show that model performance depends heavily on the number of examples in the training data. For many languages, having such amounts of corpora is a far-fetched dream. Taking inspiration from monolingual speakers exploring a new language using a bilingual dictionary, we investigate the applicability of bilingual dictionaries for languages with extremely low, or no, bilingual corpora. In this paper, we explore methods that use bilingual dictionaries with an NMT model to improve translation for extremely low-resource languages. We extend this work to multilingual systems, which exhibit zero-shot properties. We present a detailed analysis of the effects of dictionary quality, training dataset size, language family, and more on translation quality. Results on multiple low-resource test languages show that our bilingual-dictionary-based method improves over the baselines.
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a broad range of resources, focusing on those that are publicly available. In addition to providing information on the available linguistic resources, we present a set of recommendations and identify gaps in the data that could be used for research and building applications in Turkish linguistics and natural language processing.
This paper describes Meteor Universal, released for the 2014 ACL Workshop on Statistical Machine Translation. Meteor Universal brings language specific evaluation to previously unsupported target languages by (1) automatically extracting linguistic resources (paraphrase tables and function word lists) from the bitext used to train MT systems and (2) using a universal parameter set learned from pooling human judgments of translation quality from several language directions. Meteor Universal is shown to significantly outperform baseline BLEU on two new languages, Russian (WMT13) and Hindi (WMT14).
In the absence of readily available labeled data for a given task and language, annotation projection has been proposed as one of the possible strategies to automatically generate annotated data which may then be used to train supervised systems. Annotation projection has often been formulated as the task of projecting, on parallel corpora, some labels from a source into a target language. In this paper we present T-Projection, a new approach for annotation projection that leverages large pretrained text2text language models and state-of-the-art machine translation technology. T-Projection decomposes the label projection task into two subtasks: (i) the candidate generation step, in which a set of projection candidates is generated using a multilingual T5 model, and (ii) the candidate selection step, in which the candidates are ranked based on translation probabilities. We evaluate our method in three downstream tasks and five different languages. Our results show that T-Projection improves the average F1 score of previous methods by more than 8 points.
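The two-step decomposition can be illustrated with deliberately simple stand-ins: exhaustive span enumeration replaces the mT5 candidate generator, and a caller-supplied scoring function replaces the MT translation probabilities. Everything below is a toy sketch of the decomposition, not the T-Projection implementation.

```python
def generate_candidates(target_sentence, max_len=3):
    """Step (i): enumerate all spans of the target sentence up to
    max_len words as projection candidates (stand-in for the mT5
    candidate generator)."""
    words = target_sentence.split()
    return [
        " ".join(words[i:j])
        for i in range(len(words))
        for j in range(i + 1, min(i + max_len, len(words)) + 1)
    ]

def select_candidate(source_span, candidates, prob):
    """Step (ii): pick the candidate span with the highest
    translation probability prob(source_span, candidate)."""
    return max(candidates, key=lambda c: prob(source_span, c))
```

With a translation-probability model plugged in as `prob`, projecting an English span like "white house" onto "la casa blanca" reduces to scoring a handful of enumerated spans, which is what makes the selection step cheap once candidates exist.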
This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Université de Montréal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).
We introduce ParaNames, a multilingual parallel name resource consisting of 118 million names covering 400 languages. Names are provided for 13.6 million entities, which are mapped to standardized entity types (PER/LOC/ORG). Using Wikidata as the source, we create the largest resource of this type. We describe our approach to filtering and standardizing the data to provide the best quality. ParaNames is useful for multilingual language processing, both for defining tasks of name translation/transliteration and as supplementary data for tasks such as named entity recognition and linking. We demonstrate an application of ParaNames by training multilingual models for canonical name translation to and from English. Our resource is released under a Creative Commons license (CC BY 4.0) at https://github.com/bltlab/paranames.
The MS MARCO ranking dataset has been widely used for training deep learning models for IR tasks, achieving considerable effectiveness in different zero-shot scenarios. However, this type of resource is scarce in languages other than English. In this work, we present mMARCO, a multilingual version of the MS MARCO passage ranking dataset comprising 13 languages, created using machine translation. We evaluated it by fine-tuning monolingual and multilingual re-ranking models, as well as a dense multilingual model, on this dataset. Experimental results demonstrate that multilingual models fine-tuned on our translated dataset achieve superior effectiveness compared to models fine-tuned on the original English version alone. Our distilled multilingual re-ranker is competitive with non-distilled models while having 5.4 times fewer parameters. Finally, we show a positive correlation between translation quality and retrieval effectiveness, providing evidence that improvements in translation methods might lead to improvements in multilingual information retrieval. The translated datasets and fine-tuned models are available at https://github.com/unicamp-dl/mmarco.git.
We present an empirical study of neural machine translation (NMT) for truly low-resource languages, and propose a training curriculum suited to cases where both parallel training data and compute resources are lacking, reflecting the reality for the majority of the world's languages and for the researchers working on them. Previously, unsupervised NMT using back-translation (BT) and auto-encoding (AE) tasks has been applied to low-resource languages. We demonstrate that leveraging comparable data and code-switching as weak supervision, combined with BT and AE objectives, yields dramatic improvements for low-resource languages even when using only modest compute resources. The training curriculum proposed in this work achieves BLEU scores that improve over supervised NMT training by +12.2 BLEU for Gujarati and +3.7 BLEU for Kazakh, demonstrating the promise of weakly supervised NMT for low-resource languages. When trained on supervised data, our training curriculum achieves a state-of-the-art result on the Somali dataset (BLEU of 29.3 for Somali). We also observe that adding more time and GPUs to training can further improve performance, underscoring the importance of reporting compute resource usage in MT research.
As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings.
The primary obstacle to developing technologies for low-resource languages is the lack of representative, usable data. In this paper, we report the deployment of technology-driven data collection methods for creating a corpus of more than 60,000 translations from Hindi to Gondi, a low-resource vulnerable language spoken by around 2.3 million tribal people in south and central India. During this process, we help expand information access in Gondi across 2 different dimensions (a) The creation of linguistic resources that can be used by the community, such as a dictionary, children's stories, Gondi translations from multiple sources and an Interactive Voice Response (IVR) based mass awareness platform; (b) Enabling its use in the digital domain by developing a Hindi-Gondi machine translation model, which is compressed by nearly 4 times to enable its deployment on low-resource edge devices and in areas of little to no internet connectivity. We also present preliminary evaluations of utilizing the developed machine translation model to provide assistance to volunteers who are involved in collecting more data for the target language. Through these interventions, we not only created a refined and evaluated corpus of 26,240 Hindi-Gondi translations that was used for building the translation model but also engaged nearly 850 community members who can help take Gondi onto the internet.
A Machine Translation (MT) system generally aims at automatic representation of a source language in a target language, retaining the originality of context, using various Natural Language Processing (NLP) techniques. Among these NLP methods is Statistical Machine Translation (SMT), which uses probabilistic and statistical techniques to analyze information for conversion. This paper canvasses the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are briefed with a short description related to our experimental need. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with a standard benchmark dataset (Flores-200) for fine-tuning and testing, is done as a part of our experiment. Different preprocessing approaches are proposed in this paper to handle the noise of the dataset. To create the system, the MOSES open-source SMT toolkit is explored. Distance reordering is utilized with the aim of understanding the rules of grammar and context-dependent adjustments through a phrase reordering categorization framework. In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
This paper presents a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words in images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource because (i) the images are aligned with text fragments rather than whole sentences; (ii) multiple images are available for each text fragment and sentence; (iii) the sentences are free-form and real-world; and (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank and (ii) lexical translation. Results of human evaluation and automatic models show that images can be a useful complement to textual context. The dataset will benefit research on the visual grounding of words, especially in the context of free-form sentences, and is available under a Creative Commons license at https://doi.org/10.5281/zenodo.5034604.
Two key assumptions shape the usual view of ranked retrieval: (1) that the searcher can choose query words that might appear in the documents they wish to see, and (2) that ranking the retrieved documents will suffice, because the searcher will be able to recognize those they wished to find. Neither assumption is true when the documents to be searched are in a language the searcher does not know. In such cases, cross-language information retrieval (CLIR) is needed. This chapter reviews the state of the art in CLIR and outlines some open research questions.
This paper presents the OPUS ecosystem with a focus on the development of open machine translation models and tools, and their integration into end-user applications, development platforms and professional workflows. We discuss our on-going mission of increasing language coverage and translation quality, and also describe on-going work on the development of modular translation models and speed-optimized compact solutions for real-time translation on regular desktops and small devices.