Multilingual BERT (mBERT) has demonstrated considerable cross-lingual syntactic ability, enabling effective zero-shot cross-lingual transfer of syntactic knowledge. Transfer is more successful between some languages than others, but it is not well understood what causes this variation and whether it faithfully reflects differences between the languages. In this work, we investigate the distributions of grammatical relations induced from mBERT in the context of 24 typologically different languages. We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic differences defined by linguistic formalisms. Such differences, learnt via self-supervision, play a crucial role in zero-shot transfer performance and can be predicted by the variation in morphosyntactic properties between languages. These results suggest that mBERT properly encodes languages in a way consistent with linguistic diversity and provide insight into the mechanism of cross-lingual transfer.
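A minimal sketch of the kind of comparison described above, assuming Jensen-Shannon distance between relation distributions and invented placeholder frequencies and transfer scores (the paper's actual divergence measure, relation inventory, and numbers may differ):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import spearmanr

# Hypothetical relative frequencies of a few UD relations per language.
dist = {
    "en": np.array([0.30, 0.20, 0.20, 0.15, 0.15]),
    "de": np.array([0.28, 0.22, 0.18, 0.17, 0.15]),
    "ja": np.array([0.20, 0.18, 0.12, 0.35, 0.15]),
    "tr": np.array([0.22, 0.17, 0.14, 0.30, 0.17]),
}
# Hypothetical zero-shot parsing scores when transferring from English.
transfer_from_en = {"de": 0.80, "ja": 0.45, "tr": 0.55}

targets = list(transfer_from_en)
distances = [jensenshannon(dist["en"], dist[t]) for t in targets]
scores = [transfer_from_en[t] for t in targets]
rho, _ = spearmanr(distances, scores)
print(dict(zip(targets, np.round(distances, 3))), "Spearman rho:", rho)
```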
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.
Multi-lingual language models (LM), such as mBERT, XLM-R, mT5, and mBART, have been remarkably successful in enabling natural language tasks in low-resource languages through cross-lingual transfer from high-resource ones. In this work, we try to better understand how such models, specifically mT5, transfer *any* linguistic and semantic knowledge across languages, even though no explicit cross-lingual signals are provided during pre-training. Rather, only unannotated texts from each language are presented to the model separately and independently of one another, and the model appears to implicitly learn cross-lingual connections. This raises several questions that motivate our study, such as: Are the cross-lingual connections between every language pair equally strong? What properties of source and target language impact the strength of cross-lingual transfer? Can we quantify the impact of those properties on the cross-lingual transfer? In our investigation, we analyze a pre-trained mT5 to discover the attributes of cross-lingual connections learned by the model. Through a statistical interpretation framework over 90 language pairs across three tasks, we show that transfer performance can be modeled by a few linguistic and data-derived features. These observations enable us to interpret the cross-lingual understanding of the mT5 model, to choose the best source language for a task, and to anticipate its training-data demands. A key finding of this work is that similarity of syntax, morphology, and phonology are good predictors of cross-lingual transfer, significantly more so than mere lexical similarity between languages. For a given language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target-language data points.
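As a rough illustration of the final claim, the sketch below fits accuracy against the logarithm of the number of target-language examples and uses the fit to anticipate data demands; the functional form is only assumed from the "logarithmic scale" statement and all numbers are invented:

```python
import numpy as np

n_examples = np.array([0, 10, 100, 1000, 10000], dtype=float)
accuracy   = np.array([0.52, 0.58, 0.65, 0.71, 0.78])   # hypothetical scores

x = np.log10(n_examples + 1.0)          # +1 keeps the zero-shot point usable
slope, intercept = np.polyfit(x, accuracy, 1)

def predict(n):
    # Expected accuracy after adding n target-language examples.
    return intercept + slope * np.log10(n + 1.0)

print(round(predict(500), 3))
```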
Multilingual language models (MLLMs), such as mBERT, XLM, XLM-R, etc., have emerged as a viable option for bringing the power of pre-training to a large number of languages. Given their success in zero-shot transfer learning, there has been a surge of work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual, and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
Multilingual pre-trained models have demonstrated their effectiveness in many multilingual NLP tasks and enable zero-shot or few-shot transfer from high-resource languages to low-resource ones. However, due to significant typological differences and contradictions between certain languages, these models often perform poorly on many languages and cross-lingual settings, which illustrates the difficulty of learning a single model that handles a large number of diverse languages at the same time. To alleviate this problem, we propose a new multilingual pre-training pipeline. We generate language representations from a multilingual pre-trained model and conduct linguistic analysis showing that language representation similarity reflects linguistic similarity from multiple perspectives, including language family, geographical sprachbund, lexicostatistics, and syntax. We then cluster all the target languages into multiple groups and call each group a representation sprachbund. Languages in the same representation sprachbund are thus expected to boost each other in both pre-training and fine-tuning, as they share rich linguistic similarities. We pre-train one multilingual model for each representation sprachbund. Experiments on cross-lingual benchmarks show significant improvements compared with strong baselines.
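A minimal sketch of the grouping step, assuming each language is represented by a mean-pooled encoder vector (random placeholders here) and that grouping is done with plain k-means; the paper's actual representation and clustering choices may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
languages = ["en", "de", "nl", "ru", "uk", "zh", "ja", "ar"]
# Stand-ins for per-language vectors pooled from a multilingual encoder.
lang_vecs = {lang: rng.normal(size=768) for lang in languages}

X = np.stack([lang_vecs[lang] for lang in languages])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for cluster in range(3):
    group = [lang for lang, c in zip(languages, labels) if c == cluster]
    print(f"representation sprachbund {cluster}: {group}")
```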
While pre-trained language models (PLMs) primarily serve as general-purpose text encoders that can be fine-tuned for a wide variety of downstream tasks, recent work has shown that they can also be rewired to produce high-quality word representations (i.e., static word embeddings) and yield good performance on type-level lexical tasks. While existing work has mainly focused on the lexical specialization of PLMs in monolingual and bilingual settings, in this work we expose massively multilingual transformers (MMTs, e.g., mBERT or XLM-R) to multilingual lexical knowledge at scale, leveraging BabelNet as a readily available, rich source of multilingual and cross-lingual type-level lexical knowledge. Concretely, we use BabelNet's multilingual synsets to create synonym pairs across 50 languages and then subject the MMTs (mBERT and XLM-R) to a lexical specialization procedure guided by a contrastive objective. We show that such massively multilingual lexical specialization brings substantial gains on two standard cross-lingual lexical tasks, bilingual lexicon induction and cross-lingual word similarity, as well as on cross-lingual sentence retrieval. Crucially, we observe gains for languages unseen in specialization, indicating that multilingual lexical specialization enables generalization to languages without lexical constraints. In a series of subsequent controlled experiments, we show that the pre-training quality of the MMT's word representations in the specialized languages has a greater impact on performance than the linguistic diversity of the constraint set. Encouragingly, this suggests that lexical tasks involving low-resource languages benefit most from the lexical knowledge of resource-rich languages, which is generally more abundant.
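A hedged sketch of a contrastive objective over synonym pairs with in-batch negatives; the tensors stand in for MMT embeddings of BabelNet synonym pairs, and the temperature and loss form are assumptions rather than the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def synonym_contrastive_loss(anchor, positive, temperature=0.05):
    # Pull each anchor toward its synonym (matching row) and away from
    # the other rows in the batch (in-batch negatives).
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature            # [batch, batch] cosine similarities
    targets = torch.arange(a.size(0))         # diagonal entries are the true pairs
    return F.cross_entropy(logits, targets)

# Placeholder embeddings for cross-lingual synonym pairs.
anchor = torch.randn(16, 768, requires_grad=True)
positive = anchor.detach() + 0.01 * torch.randn(16, 768)
loss = synonym_contrastive_loss(anchor, positive)
loss.backward()
print(loss.item(), anchor.grad.shape)
```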
Pre-trained multilingual language models show significant performance gains for zero-shot cross-lingual model transfer on a wide range of natural language understanding (NLU) tasks. Previously, for zero-shot cross-lingual evaluation, pre-trained models were only fine-tuned on English data and tested on a variety of target languages. In this paper, we perform cross-lingual evaluation on various NLU tasks (sentence classification, sequence labeling, question answering) using prompt tuning and compare it with fine-tuning. The results show that prompt tuning achieves much better cross-lingual transfer than fine-tuning across datasets, with only 0.1% to 0.3% of the parameters tuned. Additionally, our analysis demonstrates that prompt tuning yields representations with better cross-lingual transferability and better-aligned decision boundaries on downstream tasks.
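A small sketch of the general prompt-tuning idea, assuming a frozen backbone and a learned soft prompt prepended to the input embeddings (toy embedding table, not the actual multilingual model); it mainly illustrates why the tuned-parameter fraction stays far below 1%:

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, embed: nn.Embedding, prompt_len: int = 20):
        super().__init__()
        self.embed = embed
        for p in self.embed.parameters():       # freeze the backbone embeddings
            p.requires_grad = False
        # Only these soft-prompt vectors are trained.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed.embedding_dim) * 0.02)

    def forward(self, input_ids):               # [batch, seq]
        tok = self.embed(input_ids)             # [batch, seq, dim]
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return torch.cat([prompt, tok], dim=1)  # [batch, prompt_len + seq, dim]

wrapper = SoftPromptWrapper(nn.Embedding(120_000, 768))
trainable = sum(p.numel() for p in wrapper.parameters() if p.requires_grad)
total = sum(p.numel() for p in wrapper.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```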
Recent work has shown that the knowledge acquired by multilingual BERT (mBERT) has two components: a language-specific one and a language-neutral one. This paper analyzes the relationship between them, in the context of fine-tuning on two tasks, POS tagging and natural language inference, which require the model to bring to bear different amounts of language-specific knowledge. Visualizations reveal that mBERT loses the ability to cluster representations by language after fine-tuning, a result supported by evidence from language identification experiments. However, further experiments on "unlearning" language-specific representations using gradient reversal and iterative adversarial learning show that this does not add any further improvement to the language-independent component beyond the effect of fine-tuning itself. The results presented here suggest that the process of fine-tuning causes a reorganization of the model's limited representational capacity, strengthening language-independent representations at the expense of language-specific ones.
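A sketch of a gradient-reversal layer of the kind used for such "unlearning": the forward pass is the identity and the backward pass flips (and scales) the gradient, so a language-identification head pushes the encoder away from encoding language identity. This is a generic implementation, not the paper's code, and the language-ID head here is a hypothetical stand-in:

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)          # identity in the forward direction

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed gradient to the encoder

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

h = torch.randn(4, 768, requires_grad=True)        # stand-in for encoder states
logits = grad_reverse(h) @ torch.randn(768, 104)   # hypothetical language-ID head
logits.sum().backward()
print(h.grad.shape)                                # gradients arrive reversed
```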
Universal cross-lingual sentence embeddings map semantically similar cross-lingual sentences into a shared embedding space. Aligning cross-lingual sentence embeddings usually requires supervised cross-lingual parallel sentences. In this work, we propose mSimCSE, which extends SimCSE to multilingual settings and reveals that contrastive learning on English data can, surprisingly, learn high-quality universal cross-lingual sentence embeddings without any parallel data. In unsupervised and weakly supervised settings, mSimCSE significantly improves over previous sentence embedding methods on cross-lingual retrieval and multilingual STS tasks. The performance of unsupervised mSimCSE is comparable to that of fully supervised methods on low-resource language retrieval and multilingual STS, and can be further enhanced when cross-lingual NLI data is available. Our code is publicly available at https://github.com/yaushian/mSimCSE.
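A minimal sketch of the SimCSE-style objective the method builds on, assuming the standard trick of passing the same English batch through the encoder twice with dropout active so the two views form positive pairs (toy encoder, invented dimensions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(768, 768), nn.Dropout(p=0.1))  # toy encoder
encoder.train()                               # keep dropout active for both passes

x = torch.randn(8, 768)                       # stand-in for pooled sentence inputs
z1, z2 = encoder(x), encoder(x)               # two dropout-perturbed views

sim = F.normalize(z1, dim=-1) @ F.normalize(z2, dim=-1).T / 0.05
loss = F.cross_entropy(sim, torch.arange(x.size(0)))  # in-batch negatives
print(loss.item())
```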
It has been shown that multilingual language models enable non-trivial transfer across scripts and languages. In this work, we study the structure of the internal representations that make this transfer possible. We focus on the representation of gender distinctions as a practical case study, and examine the extent to which the concept of gender is encoded in subspaces shared across different languages. Our analysis shows that gender representations consist of several prominent components that are shared across languages, alongside language-specific components. The existence of language-independent and language-specific components provides an explanation for an intriguing empirical observation we make: while gender classification transfers well across languages, interventions for gender removal trained on a single language do not transfer easily to others.
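A toy sketch of one way such an analysis could be set up: estimate a per-language gender direction as the difference of mean embeddings of masculine vs. feminine word sets and compare the directions across languages. The vectors here are random placeholders and the procedure is an assumption, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

def gender_direction(masc: np.ndarray, fem: np.ndarray) -> np.ndarray:
    d = masc.mean(axis=0) - fem.mean(axis=0)
    return d / np.linalg.norm(d)

dirs = {}
for lang in ["en", "de", "es"]:
    masc = rng.normal(size=(20, 768))   # placeholder embeddings of masculine words
    fem = rng.normal(size=(20, 768))    # placeholder embeddings of feminine words
    dirs[lang] = gender_direction(masc, fem)

# High cosine similarity between directions would indicate a shared component.
print("en/de:", dirs["en"] @ dirs["de"], "en/es:", dirs["en"] @ dirs["es"])
```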
The token embeddings in multilingual BERT (m-BERT) contain both language and semantic information. We find that a representation of a language can be obtained by simply averaging the embeddings of that language's tokens. Given this language representation, we control the output language of multilingual BERT by manipulating the token embeddings, thereby achieving unsupervised token translation. We further propose a computationally cheap but effective approach to improve the cross-lingual ability of m-BERT based on this observation.
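A hedged sketch of the averaging-and-shifting idea with placeholder tensors standing in for m-BERT token embeddings; the language names and vocabulary sizes are invented:

```python
import torch

vocab_en = torch.randn(50, 768)      # stand-in for embeddings of English tokens
vocab_fr = torch.randn(60, 768)      # stand-in for embeddings of French tokens

lang_en = vocab_en.mean(dim=0)       # "language representation" of English
lang_fr = vocab_fr.mean(dim=0)

token = vocab_en[0]
shifted = token - lang_en + lang_fr  # move the token toward the French region

# Nearest French token after the shift ~ an unsupervised translation candidate.
nearest = torch.cosine_similarity(shifted.unsqueeze(0), vocab_fr).argmax()
print(int(nearest))
```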
We target the task of cross-lingual machine reading comprehension (MRC) in a direct zero-shot setting by incorporating syntactic features from Universal Dependencies (UD); the key feature we use is the syntactic relations within each sentence. While previous work has demonstrated effective syntax-guided MRC models, we propose to adopt inter-sentence syntactic relations, in addition to the basic intra-sentence relations, to further exploit the syntactic dependencies in the multi-sentence input of the MRC task. In our approach, we build an Inter-Sentence Dependency Graph (ISDG) that connects dependency trees to form global syntactic relations across sentences. We then propose the ISDG encoder, which encodes the global dependency graph and addresses inter-sentence relations explicitly via both one-hop and multi-hop dependency paths. Experiments on three multilingual MRC datasets (XQuAD, MLQA, TyDiQA-GoldP) show that our encoder, trained only on English, improves zero-shot performance on all 14 test sets covering 8 languages, with up to 3.8 F1 / 5.2 EM improvement on average, and 5.2 F1 / 11.2 EM on certain languages. Further analysis shows that the improvement can be attributed to attention over syntactic paths that are consistent across languages.
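A rough sketch of stitching per-sentence dependency trees into a single graph and reading off multi-hop paths across sentences; the specific inter-sentence connection scheme used here (linking adjacent sentence roots) is an assumption, and the real ISDG construction may differ:

```python
from collections import defaultdict

# token -> head maps for two toy sentences; None marks the sentence root.
sent1 = {"She": "adopted", "adopted": None, "a": "cat", "cat": "adopted"}
sent2 = {"It": "sleeps", "sleeps": None, "there": "sleeps"}

graph = defaultdict(set)
roots = []
for sent in (sent1, sent2):
    for tok, head in sent.items():
        if head is None:
            roots.append(tok)
        else:
            graph[tok].add(head)
            graph[head].add(tok)

# Assumed inter-sentence link: connect adjacent sentence roots.
for r1, r2 in zip(roots, roots[1:]):
    graph[r1].add(r2)
    graph[r2].add(r1)

def shortest_path(src, dst):
    # Breadth-first search over the stitched dependency graph.
    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            frontier.append(path + [nxt])

print(shortest_path("cat", "sleeps"))  # multi-hop path crossing sentences
```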
Typologically diverse languages offer systems of lexical and grammatical aspect that allow speakers to focus on facets of event structure in ways that comport with the specific communicative setting and discourse constraints they face. In this paper, we look specifically at image captions in Arabic, Chinese, German, Russian, and Turkish, and describe a computational model for predicting lexical aspect. Despite the heterogeneity of these languages, and the notably different linguistic resources their caption corpora draw on, speakers of these languages show surprising similarities in how they frame image content. We leverage this observation for zero-shot cross-lingual learning and show that lexical aspect can be predicted for a given language even when no annotated data for that language has been observed.
While recent work on multilingual language models has demonstrated their capacity for cross-lingual zero-shot transfer to downstream tasks, the community still lacks consensus about which shared properties of languages enable such transfer. Analyses involving pairs of natural languages are often inconclusive and contradictory, since many linguistic aspects differ at the same time. In this paper, we conduct a large-scale empirical study that isolates the effects of various linguistic properties by measuring zero-shot transfer between four diverse natural languages and counterparts constructed by modifying aspects such as the script, word order, and syntax. Among other things, our experiments show that the absence of subword overlap significantly affects zero-shot transfer when the languages differ in word order, and that there is a strong correlation between transfer performance and word embedding alignment between languages (e.g., r = 0.94 on the task of NLI). Our results call for a focus on explicitly improving the embedding alignment between languages rather than relying on it to emerge implicitly.
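A sketch of one common way to quantify the word-embedding alignment that the abstract correlates with transfer: fit an orthogonal Procrustes map over a seed dictionary and average the post-mapping cosine similarities. The matrices are synthetic and the exact alignment metric used in the paper may differ:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 300))                   # source-language vectors for seed-dictionary words
Q, _ = np.linalg.qr(rng.normal(size=(300, 300)))    # hidden "true" rotation
tgt = src @ Q + 0.1 * rng.normal(size=(200, 300))   # noisy rotated counterparts

W, _ = orthogonal_procrustes(src, tgt)              # learn the mapping src @ W ~= tgt
mapped = src @ W
cos = np.sum(mapped * tgt, axis=1) / (
    np.linalg.norm(mapped, axis=1) * np.linalg.norm(tgt, axis=1)
)
print("mean post-mapping cosine (alignment proxy):", cos.mean())
```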
While pre-trained multilingual vision-language models bring some benefits, recent benchmarks across various tasks and languages show poor cross-lingual generalization when such models are applied to non-English data, with a large gap between (supervised) English performance and (zero-shot) cross-lingual transfer. In this work, we investigate the poor performance of these models on zero-shot cross-lingual visual question answering (VQA), where models are fine-tuned on English visual question data and evaluated on 7 typologically diverse languages. We improve cross-lingual transfer with three strategies: (1) we introduce a linguistic prior objective that augments the cross-entropy loss with a similarity-based loss to guide the model during training; (2) we learn a task-specific subnetwork that improves cross-lingual generalization and reduces variance without modifying the model; (3) we augment training examples with synthetic code-mixing to promote alignment of embeddings between the source and target languages. Our experiments on xGQA with the pre-trained multilingual multimodal transformers UC2 and M3P demonstrate the consistent effectiveness of the proposed fine-tuning strategies across 7 languages, outperforming existing transfer methods with sparse models. Code and data for reproducing our findings are publicly available.
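A toy sketch of the synthetic code-mixing in strategy (3): randomly swap source-language words for dictionary translations during fine-tuning. The tiny dictionary and mixing ratio are invented, and the paper's augmentation details may differ:

```python
import random

# Hypothetical English-German seed dictionary.
en_de = {"what": "was", "color": "Farbe", "is": "ist", "the": "die", "car": "Auto"}

def code_mix(question: str, ratio: float = 0.5, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for tok in question.lower().split():
        # Replace a word with its translation with probability `ratio`.
        out.append(en_de[tok] if tok in en_de and rng.random() < ratio else tok)
    return " ".join(out)

print(code_mix("What color is the car"))
```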
In this work, we present a systematic empirical study focused on the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR, a setting with no relevance judgments for IR-specific fine-tuning, pre-trained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance; the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla "off-the-shelf" variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score document sections against the query. In the second part, we evaluate multilingual encoders fine-tuned on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that supervised re-ranking rarely improves over multilingual transformers used as unsupervised base rankers; only with in-domain contrastive fine-tuning (i.e., same domain, language transfer only) do we manage to improve the ranking quality. We observe a significant empirical gap between cross-lingual retrieval results and the results of (zero-shot) cross-lingual transfer of models for monolingual retrieval in the target language, which points to "monolingual overfitting" of retrieval models trained on monolingual data.
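A minimal sketch of localized relevance matching as described above: split a document into overlapping sections, score each section against the query independently, and let the best section represent the document. The embedding function is a random placeholder for a multilingual encoder, and the section size and stride are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:
    # Placeholder for a multilingual sentence/passage encoder.
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def split_sections(tokens, size=100, stride=50):
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - size, 0) + 1, stride)]

def doc_score(query: str, doc_tokens) -> float:
    q = embed(query)
    # Score each section independently and keep the best local match.
    return max(float(q @ embed(" ".join(sec))) for sec in split_sections(doc_tokens))

print(doc_score("budget deficit 2020", ["token"] * 350))
```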
We explore deep clustering of text representations for unsupervised model interpretation and syntax induction. Since these representations are high-dimensional, out-of-the-box clustering methods do not work well. Our approach therefore jointly transforms the representations into a lower-dimensional, cluster-friendly space and clusters them. In this work, we consider two notions of syntax: part-of-speech induction (POSI) and constituency labelling (CoLab). Interestingly, we find that multilingual BERT (mBERT) contains a surprising amount of syntactic knowledge of English, possibly even as much as English BERT (EBERT). Our model can be used as a supervision-free probe, which is arguably a less biased way of probing. We find that, compared to supervised probes, unsupervised probes show benefits from higher layers. We further note that our unsupervised probe uses EBERT and mBERT representations differently, especially for POSI. We validate the efficacy of the probe by demonstrating its capability as an unsupervised syntax induction technique. The probe works for both syntactic formalisms simply by adapting the input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12-tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource-impoverished languages and report strong results.
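A simplified sketch of the probing-by-clustering idea: reduce token representations, cluster them, and score POS induction with many-to-one accuracy. The paper learns the low-dimensional projection jointly with the clustering, whereas this sketch uses PCA and k-means as stand-ins, and all data are placeholders:

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
token_reps = rng.normal(size=(500, 768))                    # stand-in for contextual token vectors
gold_tags = rng.choice(["NOUN", "VERB", "ADJ"], size=500)   # placeholder gold POS tags

low_dim = PCA(n_components=32).fit_transform(token_reps)
clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(low_dim)

# Many-to-one accuracy: each induced cluster votes for its majority gold tag.
correct = 0
for c in set(clusters):
    tags_in_cluster = gold_tags[clusters == c]
    correct += Counter(tags_in_cluster).most_common(1)[0][1]
print("many-to-one accuracy:", correct / len(gold_tags))
```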
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
Previous work has mainly focused on improving cross-lingual transfer for NLU tasks with multilingual pre-trained encoders (MPEs), or on improving the performance of supervised machine translation with BERT. However, it is under-explored whether an MPE can help improve the cross-lingual transferability of an NMT model. In this paper, we focus on a zero-shot cross-lingual transfer task in NMT: the NMT model is trained with a parallel dataset for only one language pair and an off-the-shelf MPE, and it is then directly tested on zero-shot language pairs. We propose SixT, a simple yet effective model for this task. SixT leverages the MPE with a two-stage training schedule and gains further improvement from a disentangled encoder and a capacity-enhanced decoder. With this method, SixT significantly outperforms mBART, a pre-trained multilingual encoder-decoder model designed for NMT, with an average improvement of 7.1 BLEU on zero-shot any-to-English test sets across 14 source languages. Furthermore, with much less training computation cost and training data, our model achieves better performance on 15 any-to-English test sets than CRISS and m2m-100, two strong multilingual NMT baselines.
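A hedged sketch of a two-stage schedule in the spirit described above: stage one trains only the decoder on top of a frozen pre-trained encoder, and stage two unfreezes everything. The toy Transformer modules stand in for the actual MPE and decoder; the exact staging details are assumptions:

```python
import torch.nn as nn

encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=6)
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=512, nhead=8), num_layers=6)

def set_trainable(module: nn.Module, flag: bool):
    for p in module.parameters():
        p.requires_grad = flag

def n_trainable(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# Stage 1: train the decoder only, on the single available parallel pair.
set_trainable(encoder, False)
set_trainable(decoder, True)
print("stage 1 trainable params:", n_trainable(encoder) + n_trainable(decoder))

# Stage 2: unfreeze the encoder and continue training end to end.
set_trainable(encoder, True)
print("stage 2 trainable params:", n_trainable(encoder) + n_trainable(decoder))
```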