The zero-shot cross-lingual ability of models pretrained on multilingual, and even monolingual, corpora has spurred many hypotheses to explain this intriguing empirical result. However, due to the cost of pretraining, most research relies on public models whose pretraining methodology, such as the choice of tokenization, corpus size, and compute budget, may differ drastically. When researchers pretrain their own models, they often do so under a limited budget, and the resulting models may significantly underperform SOTA models. These experimental differences have led to various inconsistent conclusions about the nature of the cross-lingual ability of these models. To help further research on the subject, we release 10 monolingual byte-level models rigorously pretrained under the same configuration, with a large compute budget (equivalent to 420 days on a V100) and corpora four times larger than the original BERT's. Because they are tokenizer-free, the problem of unseen token embeddings is eliminated, allowing researchers to attempt a wider range of cross-lingual experiments on languages with different scripts. Additionally, we release two models pretrained on non-natural-language text that can be used in sanity-check experiments. Experiments on QA and NLI tasks show that our monolingual models achieve performance competitive with multilingual ones, and can therefore serve to strengthen our understanding of cross-lingual transferability in language models.
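As a rough illustration of why byte-level, tokenizer-free models sidestep the unseen-token-embedding problem, the sketch below maps any string, regardless of script, to UTF-8 byte IDs; the ID offset reserved for special tokens is an assumption for illustration, not necessarily the released models' exact convention.

```python
# Minimal byte-level "tokenization" sketch: every string in any script maps to
# UTF-8 byte IDs, so there is no out-of-vocabulary token by construction.
def byte_encode(text: str, offset: int = 3) -> list[int]:
    """Map each UTF-8 byte to an ID; IDs 0-2 are assumed reserved for PAD/BOS/EOS."""
    return [b + offset for b in text.encode("utf-8")]

def byte_decode(ids: list[int], offset: int = 3) -> str:
    return bytes(i - offset for i in ids).decode("utf-8", errors="replace")

print(byte_encode("नमस्ते"))                 # Devanagari text still yields in-vocabulary byte IDs
print(byte_decode(byte_encode("नमस्ते")))    # round-trips back to the original string
```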
The MS MARCO ranking dataset has been widely used to train deep learning models for IR tasks, achieving considerable effectiveness in diverse zero-shot scenarios. However, this type of resource is scarce in languages other than English. In this work, we present mMARCO, a multilingual version of the MS MARCO passage ranking dataset comprising 13 languages, created using machine translation. We evaluate it by fine-tuning monolingual and multilingual re-ranking models, as well as a multilingual dense retrieval model, on this dataset. Experimental results demonstrate that multilingual models fine-tuned on our translated dataset achieve superior effectiveness to models fine-tuned on the original English version alone. Our distilled multilingual re-ranker is competitive with non-distilled models while having 5.4 times fewer parameters. Finally, we show a positive correlation between translation quality and retrieval effectiveness, providing evidence that improvements in translation methods may lead to improvements in multilingual information retrieval. The translated datasets and fine-tuned models are available at https://github.com/unicamp-dl/mmarco.git.
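For context, a minimal sketch of the kind of multilingual cross-encoder re-ranking setup that such a dataset supports is shown below. The checkpoint name is a generic placeholder (a re-ranker actually fine-tuned on mMARCO would be loaded instead), so the untrained classification head here produces meaningless scores until fine-tuning.

```python
# Sketch: score query-passage pairs with a multilingual cross-encoder re-ranker.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"  # placeholder base model, not a released mMARCO re-ranker
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

query = "qual é a capital do Brasil?"
passages = ["Brasília é a capital do Brasil.",
            "O Rio de Janeiro é famoso pelo carnaval."]

inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)   # higher = more relevant (after fine-tuning)
ranking = scores.argsort(descending=True).tolist()
print(ranking)
```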
An effective approach to cross-lingual transfer is to fine-tune a bilingual or multilingual model on a supervised dataset in one language and evaluate it on another language in a zero-shot manner. Translating examples at training time or at inference time are also viable alternatives. However, there are costs associated with these methods that are rarely addressed in the literature. In this work, we analyze cross-lingual methods in terms of their effectiveness (e.g., accuracy), development and deployment costs, and latency at inference time. Our experiments on three tasks indicate that the best cross-lingual method is highly task-dependent. Finally, by combining zero-shot and translation methods, we achieve state-of-the-art results on the three datasets used in this work. Based on these results, we question the need for manually labeled training data in a target language. Code and translated datasets are available at https://github.com/unicamp-dl/cross-lingual-analysis
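To make the cost trade-off concrete, here is a schematic sketch (with hypothetical placeholder callables, not the paper's code) contrasting the zero-shot and translate-test deployment paths discussed above.

```python
# Schematic comparison of two inference-time strategies for a target-language input.
from typing import Callable

def zero_shot_predict(text_pt: str, multilingual_model: Callable[[str], str]) -> str:
    # Single forward pass: no translation latency, but relies on cross-lingual transfer.
    return multilingual_model(text_pt)

def translate_test_predict(text_pt: str,
                           translate_pt_en: Callable[[str], str],
                           english_model: Callable[[str], str]) -> str:
    # Extra MT cost and latency at inference time, but the task model only ever sees English.
    return english_model(translate_pt_en(text_pt))
```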
While prior work has established that the use of parallel data is conducive for cross-lingual learning, it is unclear if the improvements come from the data itself, or if it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) as well as the task-specific setting (translate-train), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches.
For multilingual sequence-to-sequence pretrained language models (multilingual Seq2Seq PLMs) such as mBART, the self-supervised pretraining task is trained on a wide range of monolingual data, e.g., 25 languages from CommonCrawl, whereas downstream cross-lingual tasks typically operate on a bilingual subset, e.g., English-German. This creates a data discrepancy, namely a domain gap, and a cross-lingual learning objective discrepancy, namely a task gap, between the pretraining and fine-tuning stages. To bridge these cross-lingual domain and task gaps, we extend the vanilla pretrain-finetune pipeline with an extra code-switching restoration task. Specifically, the first stage employs a self-supervised code-switching restoration task as a pretext task, allowing the multilingual Seq2Seq PLM to acquire some in-domain alignment information. In the second stage, we fine-tune the model on downstream data as usual. Experiments on NLG evaluation (12 bilingual translation tasks, 30 zero-shot tasks, and 2 cross-lingual summarization tasks) and NLU evaluation (7 cross-lingual natural language inference tasks) show that our model consistently outperforms the strong baseline of mBART with standard fine-tuning. Analyses indicate that our approach narrows the Euclidean distance between cross-lingual sentence representations and improves model generalization at negligible computational cost. We release our code at https://github.com/zanchangtong/csr4mbart.
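A toy illustration of how a code-switching restoration example might be constructed is given below; the bilingual dictionary, swap rate, and word-level substitution are assumptions for illustration rather than the authors' implementation.

```python
# Build a code-switched input by swapping some words with dictionary translations;
# the model is trained to restore the original monolingual sentence.
import random

bilingual_dict = {"cat": "Katze", "sat": "saß", "mat": "Matte"}   # hypothetical EN->DE lexicon

def make_csr_example(sentence: str, swap_rate: float = 0.3, seed: int = 0):
    rng = random.Random(seed)
    tokens = sentence.split()
    corrupted = [
        bilingual_dict[t] if t in bilingual_dict and rng.random() < swap_rate else t
        for t in tokens
    ]
    return " ".join(corrupted), sentence          # (model input, restoration target)

src, tgt = make_csr_example("the cat sat on the mat")
print(src, "->", tgt)
```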
Given the impact of language models on the field of Natural Language Processing, a number of Spanish encoder-only masked language models (aka BERTs) have been trained and released. These models were developed either within large projects using very large private corpora or by means of smaller scale academic efforts leveraging freely available data. In this paper we present a comprehensive head-to-head comparison of language models for Spanish with the following results: (i) Previously ignored multilingual models from large companies fare better than monolingual models, substantially changing the evaluation landscape of language models in Spanish; (ii) Results across the monolingual models are not conclusive, with supposedly smaller and inferior models performing competitively. Based on these empirical results, we argue for the need of more research to understand the factors underlying them. In this sense, the effect of corpus size, quality and pre-training techniques need to be further investigated to be able to obtain Spanish monolingual models significantly better than the multilingual ones released by large private companies, especially in the face of rapid ongoing progress in the field. The recent activity in the development of language technology for Spanish is to be welcomed, but our results show that building language models remains an open, resource-heavy problem which requires marrying resources (monetary and/or computational) with the best research expertise and practice.
Dense word vectors, or "word embeddings", which encode the semantic properties of words, have become integral to NLP tasks such as machine translation (MT), question answering (QA), word sense disambiguation (WSD), and information retrieval (IR). In this paper, we use a variety of existing approaches to create multiple word embeddings for 14 Indian languages. We place the embeddings for all of these languages, namely Assamese, Bengali, Gujarati, Hindi, Kannada, Konkani, Malayalam, Marathi, Nepali, Odiya, Punjabi, Sanskrit, Tamil, and Telugu, in a single repository. Relatively newer approaches that emphasize context (BERT, ELMo, etc.) have shown significant improvements, but require substantial resources to produce usable models. We release pretrained embeddings generated with both contextual and non-contextual approaches. We also use MUSE and XLM to train cross-lingual embeddings for all of the above languages. To demonstrate the effectiveness of our embeddings, we evaluate our embedding models on XPOS, UPOS, and NER tasks for all of these languages. We release 436 models built with 8 different approaches. We hope they will be useful for resource-constrained Indian-language NLP. The title of this paper refers to Forster's famous novel "A Passage to India", first published in 1924.
Pretrained contextualized text representation models learn effective representations of natural language, making it understandable to machines. Following the breakthrough of the attention mechanism, a new generation of pretrained models has been proposed, achieving strong performance since the introduction of the Transformer. Bidirectional Encoder Representations from Transformers (BERT) has become the state-of-the-art model for language understanding. Despite this success, most available models have been trained on Indo-European languages, while similar research for under-represented languages and dialects remains sparse. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for under-represented languages, with a specific focus on the Tunisian dialect. We evaluate our language model on a sentiment analysis task, a dialect identification task, and a reading comprehension question-answering task. We show that using noisy web-crawled data instead of structured data (Wikipedia, articles, etc.) is more suitable for such non-standardized languages. Moreover, the results indicate that a relatively small web-crawled dataset leads to performance comparable to that obtained with larger datasets. Finally, our TunBERT model reaches or improves the state of the art on all three downstream tasks. We release the TunBERT pretrained model and the datasets used for fine-tuning.
While recent work on multilingual language models has demonstrated their capacity for zero-shot cross-lingual transfer on downstream tasks, the community lacks consensus on which shared properties between languages enable such transfer. Analyses involving pairs of natural languages are often inconclusive and contradictory, since many linguistic aspects differ simultaneously. In this paper, we conduct a large-scale empirical study to isolate the effects of various linguistic properties by measuring zero-shot transfer between four diverse natural languages and counterparts constructed by modifying aspects such as the script, word order, and syntax. Among other things, our experiments show that the absence of subword overlap significantly affects zero-shot transfer when languages differ in word order, and that there is a strong correlation between transfer performance and word embedding alignment between languages (e.g., r = 0.94 on the NLI task). Our results call for a focus on explicitly improving embedding alignment between languages rather than relying on its implicit emergence.
Cross-lingual word embeddings (CLWE) have proven useful in many cross-lingual tasks. However, most existing approaches to learning CLWE, including those based on contextual embeddings, are sense-agnostic. In this work, we propose a novel framework to align contextual embeddings at the sense level by leveraging cross-lingual signal from bilingual dictionaries only. We operationalize the framework by first proposing a novel sense-aware cross-entropy loss to explicitly model word senses. Monolingual ELMo and BERT models pretrained with the sense-aware cross-entropy loss show significant improvements on word sense disambiguation tasks. We then propose a sense-alignment objective, on top of the sense-aware cross-entropy loss, for cross-lingual model pretraining, and pretrain cross-lingual models for several language pairs (English to German/Spanish/Japanese/Chinese). Compared with the best baseline results, our cross-lingual models achieve average performance improvements of 0.52%, 2.09%, and 1.29% on zero-shot cross-lingual transfer, sentiment classification, and XNLI tasks, respectively.
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameters models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM--the Big Science Large Open-science Open-access Multilingual language model--our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience .
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.
Pre-trained multilingual language models show significant performance gains for zero-shot cross-lingual model transfer on a wide range of natural language understanding (NLU) tasks. Previously, for zero-shot cross-lingual evaluation, pre-trained models are only fine-tuned on English data and tested on a variety of target languages. In this paper, we do cross-lingual evaluation on various NLU tasks (sentence classification, sequence labeling, question answering) using prompt-tuning and compare it with fine-tuning. The results show that prompt tuning achieves much better cross-lingual transfer than fine-tuning across datasets, with only 0.1% to 0.3% tuned parameters. Additionally, we demonstrate through the analysis that prompt tuning can have better cross-lingual transferability of representations on downstream tasks with better aligned decision boundaries.
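The sketch below shows one common way to implement soft prompt tuning with a frozen multilingual encoder, in the spirit of the setup described above; the encoder checkpoint, prompt length, and classification head are placeholders rather than the paper's exact configuration.

```python
# Soft prompt tuning sketch: only the prompt matrix and classifier are trainable.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "xlm-roberta-base"  # placeholder multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
for p in encoder.parameters():
    p.requires_grad = False                      # the PLM itself is frozen

n_prompt, hidden = 20, encoder.config.hidden_size
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)
classifier = torch.nn.Linear(hidden, 3)          # e.g. 3-way NLI labels (assumed head)

batch = tokenizer(["A soccer game with multiple males playing.", "An older man sits alone."],
                  ["Some men are playing a sport.", "Nobody is outside."],
                  padding=True, return_tensors="pt")
tok_emb = encoder.get_input_embeddings()(batch["input_ids"])          # (B, L, H)
prompts = soft_prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)    # (B, P, H)
inputs_embeds = torch.cat([prompts, tok_emb], dim=1)
attn = torch.cat([torch.ones(tok_emb.size(0), n_prompt, dtype=batch["attention_mask"].dtype),
                  batch["attention_mask"]], dim=1)

out = encoder(inputs_embeds=inputs_embeds, attention_mask=attn).last_hidden_state
logits = classifier(out[:, 0])                   # classify from the first position
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([0, 1]))
loss.backward()                                  # gradients flow only into soft_prompt and classifier
```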
Large pretrained language models (LMs) have recently gained enormous popularity. Training these models requires ever more computational resources, and most existing models are trained on English text only. Training them in other languages is extremely expensive. To alleviate this problem, we introduce a method called WECHSEL to transfer English models to new languages. We replace the English model's tokenizer with a tokenizer in the target language and initialize the token embeddings so that they are semantically similar to the English ones, by leveraging multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer GPT-2 and RoBERTa models to four other languages (French, German, Chinese, and Swahili). WECHSEL improves over previously proposed cross-lingual parameter transfer methods and outperforms comparably sized models trained from scratch in the target language while requiring less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.
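The core embedding-initialization idea can be sketched as below, assuming src_static and tgt_static are static word vectors already aligned into a shared space and src_emb holds the pretrained English token embeddings; the top-k softmax weighting is a simplification for illustration, not the released implementation.

```python
# Initialize target-language token embeddings as similarity-weighted averages of
# English token embeddings, with similarities taken from an aligned static space.
import numpy as np

def init_target_embeddings(src_emb, src_static, tgt_static, k=10, temperature=0.1):
    """src_emb: (V_src, d_model); src_static: (V_src, d_static); tgt_static: (V_tgt, d_static)."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    sims = normalize(tgt_static) @ normalize(src_static).T        # cosine similarity, (V_tgt, V_src)
    tgt_emb = np.zeros((tgt_static.shape[0], src_emb.shape[1]))
    for i, row in enumerate(sims):
        top = np.argpartition(-row, k)[:k]                        # k most similar source tokens
        w = np.exp(row[top] / temperature)
        w /= w.sum()
        tgt_emb[i] = w @ src_emb[top]                             # weighted average of their embeddings
    return tgt_emb

# Toy usage with random matrices, just to show the shapes involved.
rng = np.random.default_rng(0)
tgt = init_target_embeddings(rng.normal(size=(100, 32)), rng.normal(size=(100, 8)), rng.normal(size=(50, 8)))
print(tgt.shape)  # (50, 32)
```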
Meta-learning with auxiliary languages has shown promising improvements for cross-lingual natural language processing. However, previous studies sample the meta-training and meta-testing data from the same language, which limits the model's ability to transfer across languages. In this paper, we propose XLA-MAML, which performs direct cross-lingual adaptation in the meta-learning stage. We conduct zero-shot and few-shot experiments on natural language inference and question answering. The experimental results demonstrate the effectiveness of our method across different languages, tasks, and pretrained models. We also analyze various cross-lingual-specific settings for meta-learning, including sampling strategies and parallelism.
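A first-order sketch of a cross-lingual meta-learning step in this spirit is shown below: the model adapts on a support batch from one language and the meta-gradient comes from a query batch in another. The tiny linear model, random batches, and first-order approximation are simplifications, not the XLA-MAML implementation.

```python
# First-order (FOMAML-style) cross-lingual meta-learning step.
import copy
import torch

model = torch.nn.Linear(8, 3)                       # stand-in for a pretrained encoder + task head
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def loss_fn(net, batch):
    x, y = batch
    return torch.nn.functional.cross_entropy(net(x), y)

def xla_maml_step(support_lang_a, query_lang_b, inner_lr=1e-2):
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    loss_fn(adapted, support_lang_a).backward()     # inner-loop adaptation on language A
    inner_opt.step()
    adapted.zero_grad()
    query_loss = loss_fn(adapted, query_lang_b)     # evaluate the adapted model on language B
    query_loss.backward()
    for p, pa in zip(model.parameters(), adapted.parameters()):
        p.grad = pa.grad.clone()                    # first-order approximation of the meta-gradient
    return query_loss.item()

support = (torch.randn(4, 8), torch.randint(0, 3, (4,)))   # pretend: language A examples
query = (torch.randn(4, 8), torch.randint(0, 3, (4,)))     # pretend: language B examples
meta_opt.zero_grad()
print(xla_maml_step(support, query))
meta_opt.step()
```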
Universal cross-lingual sentence embeddings map semantically similar cross-lingual sentences into a shared embedding space. Aligning cross-lingual sentence embeddings usually requires supervised cross-lingual parallel sentences. In this work, we propose mSimCSE, which extends SimCSE to multilingual settings and reveal that contrastive learning on English data can surprisingly learn high-quality universal cross-lingual sentence embeddings without any parallel data. In unsupervised and weakly supervised settings, mSimCSE significantly improves previous sentence embedding methods on cross-lingual retrieval and multilingual STS tasks. The performance of unsupervised mSimCSE is comparable to fully supervised methods in retrieving low-resource languages and multilingual STS. The performance can be further enhanced when cross-lingual NLI data is available. Our code is publicly available at https://github.com/yaushian/mSimCSE.
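For reference, the unsupervised SimCSE-style objective that mSimCSE builds on can be sketched as follows: each English sentence is encoded twice with different dropout noise and the two views are contrasted against in-batch negatives. The encoder checkpoint and hyperparameters below are placeholders.

```python
# Unsupervised contrastive (InfoNCE) objective with dropout-based positive pairs.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_name = "xlm-roberta-base"   # placeholder multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
encoder.train()                   # keep dropout active so the two passes differ

sentences = ["A man is playing a guitar.", "The weather is nice today."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

def embed(batch):
    out = encoder(**batch).last_hidden_state[:, 0]    # first-token representation
    return F.normalize(out, dim=-1)

z1, z2 = embed(batch), embed(batch)                   # two dropout-noised views of the same sentences
temperature = 0.05
logits = z1 @ z2.T / temperature                      # (batch, batch) similarity matrix
labels = torch.arange(len(sentences))                 # positives sit on the diagonal
loss = F.cross_entropy(logits, labels)
loss.backward()
print(float(loss))
```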
Multilingual language models (MLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. So far, only ~28 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by developing SERENGETI, a set of massively multilingual language models that covers 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to four MLMs that each cover any number of African languages. SERENGETI outperforms other models on 11 datasets across the eight tasks and achieves an average F1 of 82.27. We also perform error analysis on our models' performance and show the influence of mutual intelligibility when the models are applied under zero-shot settings. We will publicly release our models for research.
In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we aim to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document Reordering Machine Translation (DrMT), in which the input document is shuffled and masked and must be translated. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks, including more than 12 BLEU points for seen-language-pair document-level MT, more than 7 BLEU points for unseen-language-pair document-level MT, and more than 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of various factors in document pretraining, including (1) the effect of pretraining data quality and (2) the effect of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
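A rough sketch of how a DrMT-style training example could be assembled is shown below; the masking rate, mask token, and word-level masking are assumptions for illustration, not the paper's exact recipe.

```python
# Build a (shuffled + masked source document, ordered target translation) training pair.
import random

def make_drmt_example(src_sentences, tgt_sentences, mask_token="<mask>", mask_rate=0.15, seed=0):
    rng = random.Random(seed)
    shuffled = src_sentences[:]
    rng.shuffle(shuffled)                      # destroy document order on the source side
    noisy = [
        " ".join(mask_token if rng.random() < mask_rate else tok for tok in s.split())
        for s in shuffled
    ]                                          # mask a fraction of source tokens
    source = " ".join(noisy)
    target = " ".join(tgt_sentences)           # translation in the original sentence order
    return source, target

src = ["Der Hund schläft.", "Die Katze miaut."]
tgt = ["The dog is sleeping.", "The cat meows."]
print(make_drmt_example(src, tgt))
```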
Multilingual pretrained language models (PLMs) have demonstrated impressive performance on downstream tasks for both high-resource and low-resource languages. However, there is still a large performance drop for languages unseen during pretraining, especially African languages. One of the most effective approaches to adapting to a new language is language adaptive fine-tuning (LAFT) -- fine-tuning a multilingual PLM on monolingual text of the language using the pretraining objective. However, adapting to each target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models, since they become specialized to a single language. In this paper, we perform multilingual adaptive fine-tuning (MAFT) on the 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent, to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we remove vocabulary tokens corresponding to non-African writing scripts from the embedding layer before MAFT, reducing the model size by around 50%. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive with applying LAFT to individual languages while requiring far less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter-efficient fine-tuning methods.
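A simplified sketch of shrinking a multilingual PLM's embedding matrix to a kept subset of tokens is shown below; the script-based keep() heuristic is hypothetical, and a real pipeline would also rebuild the tokenizer (and any tied output head) so that token IDs line up with the new embedding rows before continued pretraining.

```python
# Trim the embedding layer of a multilingual encoder to a selected token subset.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Hypothetical heuristic: keep special tokens plus tokens made of Latin-script letters,
# digits, or punctuation (standing in for "drop tokens of non-African writing scripts").
def keep(token: str) -> bool:
    return all(ord(ch) < 0x0250 or not ch.isalpha() for ch in token)

vocab = tokenizer.get_vocab()                                   # token string -> id
kept_ids = sorted({i for t, i in vocab.items() if keep(t)} | set(tokenizer.all_special_ids))

old_emb = model.get_input_embeddings().weight.data              # (V_old, hidden)
new_emb = torch.nn.Embedding(len(kept_ids), old_emb.size(1))
new_emb.weight.data.copy_(old_emb[kept_ids])                    # carry over the surviving rows
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(kept_ids)

print(f"embedding rows reduced from {old_emb.size(0)} to {len(kept_ids)}")
```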