This position paper discusses the problem of multilingual evaluation. Using simple statistics, such as average language performance, might inject linguistic biases in favor of dominant language families into the evaluation methodology. We argue that a qualitative analysis informed by comparative linguistics is needed for multilingual results in order to detect this kind of bias. In our case study we show that results in published works can indeed be linguistically biased, and we demonstrate that a visualization based on the URIEL typological database can detect such bias.
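As a hedged illustration of the kind of typology-aware analysis argued for here, the sketch below uses the lang2vec package (a common programmatic interface to the URIEL database) to compute pairwise typological distances between a benchmark's languages; the language codes and the choice of feature set are illustrative assumptions, not the paper's actual setup.

    # Minimal sketch (not the paper's code): use URIEL via lang2vec to check
    # whether a benchmark's languages cluster around a few typological profiles.
    # The ISO 639-3 codes and the "syntax_knn" feature set are illustrative.
    import numpy as np
    import lang2vec.lang2vec as l2v

    langs = ["eng", "deu", "rus", "hin", "fin", "jpn"]  # hypothetical benchmark languages
    feats = l2v.get_features(langs, "syntax_knn")       # URIEL syntactic features (kNN-imputed)
    vecs = np.array([feats[l] for l in langs], dtype=float)

    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Pairwise typological distance matrix; large blocks of near-zero distances
    # would indicate that an "average performance" score is dominated by
    # typologically similar languages.
    dist = np.array([[cosine_distance(vecs[i], vecs[j]) for j in range(len(langs))]
                     for i in range(len(langs))])
    print(np.round(dist, 2))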
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
Multilingual BERT (mBERT) has demonstrated considerable cross-lingual syntactic ability, whereby it enables effective zero-shot cross-lingual transfer of syntactic knowledge. The transfer is more successful between some languages than others, but it is not well understood what leads to this variation and whether it fairly reflects differences between languages. In this work, we investigate the distributions of grammatical relations induced from mBERT in the context of 24 typologically different languages. We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic difference in terms of linguistic formalisms. Such differences, learnt via self-supervision, play a crucial role in zero-shot transfer performance and can be predicted by variation in morphosyntactic properties between languages. These results suggest that mBERT properly encodes languages in a way consistent with linguistic diversity and provide insights into the mechanism of cross-lingual transfer.
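The comparison of per-language grammatical-relation distributions can be made concrete with a divergence measure; the sketch below, an assumption for illustration rather than the paper's released code, compares two hypothetical relation distributions with the Jensen-Shannon distance.

    # Hedged sketch: compare grammatical-relation distributions induced for two
    # languages using the Jensen-Shannon distance. The counts below are made up.
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    relations = ["nsubj", "obj", "amod", "advmod", "case"]
    lang_a = np.array([0.30, 0.20, 0.25, 0.15, 0.10])  # hypothetical distribution
    lang_b = np.array([0.28, 0.22, 0.20, 0.10, 0.20])

    # scipy returns the JS distance (square root of the divergence), here base 2.
    jsd = jensenshannon(lang_a, lang_b, base=2.0)
    print(f"Jensen-Shannon distance: {jsd:.3f}")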
The prototypical NLP experiment trains a standard architecture on labeled English data and optimizes for accuracy, without considering other dimensions such as fairness, interpretability, or computational efficiency. We show through a manual classification of recent NLP research papers that this is indeed the case, and refer to it as the square one experimental setup. We observe that NLP research often goes beyond the square one setup, e.g. focusing not only on accuracy but also on fairness or interpretability, yet typically only along a single dimension. For example, most work targeting multilinguality considers only accuracy; most work on fairness or interpretability considers only English; and so on. We demonstrate this through a manual classification of recent NLP research papers and ACL award recipients. Such one-dimensionality of most research means we are only exploring a fraction of the NLP research search space. We provide historical and recent examples of how the square one bias has led researchers to draw false conclusions or make unwise choices, point to promising yet unexplored directions on the research manifold, and make practical suggestions to enable more multi-dimensional research. We open-source the results of our annotations to enable further analysis at https://github.com/google-research/url-nlp
As language technologies become more ubiquitous, there are growing efforts to expand the linguistic diversity and coverage of natural language processing (NLP) systems. Arguably, the most important factor influencing the quality of modern NLP systems is data availability. In this work, we study the geographic representativeness of NLP datasets, aiming to quantify how well NLP datasets match the expected needs of language speakers. In doing so, we use entity recognition and linking systems, making important observations about their cross-lingual consistency and providing suggestions for more robust evaluation. Finally, we explore some geographic and economic factors that may explain the observed dataset distributions. Code and data are available here: https://github.com/ffaisal93/dataset_geography. Additional visualizations are available here: https://nlp.cs.gmu.edu/project/datasetmaps/.
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois. Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from a major world language and a distinctive grammar reflecting the languages of the original speakers and the process of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer from large monolingual or multilingual pretrained models. While our work, along with previous work, shows that transfer from these models to low-resource languages that are unrelated to languages in their training set is not very effective, we would expect stronger results from transfer to creoles. Indeed, our experiments show considerably better results from few-shot learning of JamPatoisNLI than for such unrelated languages, and help us begin to understand how the unique relationship between creoles and their high-resource base languages affects cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring premises and expert-written hypotheses, is a step towards steering research into a traditionally underserved language and a useful benchmark for understanding cross-lingual NLP.
Providing better language tools for low-resource and endangered languages is imperative for equitable growth. Recent progress with massively multilingual pretrained models has proven surprisingly effective at performing zero-shot transfer to a wide variety of languages. However, this transfer is not universal, with many languages not currently understood by multilingual approaches. It is estimated that only 72 languages possess a "small set of labeled datasets" on which we could test a model's performance, with the vast majority of languages lacking even the resources needed to evaluate performance. In this work, we attempt to clarify which languages do and do not currently benefit from such transfer. To that end, we develop a general approach that requires only unlabelled text to detect which languages are not well understood by a cross-lingual model. Our approach is derived from the hypothesis that if a model's understanding is insensitive to perturbations to text in a language, it is likely to have a limited understanding of that language. We construct a cross-lingual sentence similarity task to evaluate our approach empirically on 350, primarily low-resource, languages.
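A hedged sketch of the underlying idea (insensitivity to perturbation suggests limited understanding): embed a sentence and a randomly perturbed copy, then measure how much the representation changes. The encoder choice and the word-shuffling perturbation below are illustrative assumptions, not the paper's exact procedure.

    # Minimal sketch under assumed setup: if shuffling the words barely changes a
    # model's sentence embedding, the model may not really "understand" the language.
    import random
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # illustrative choice

    def perturb(sentence: str) -> str:
        """Randomly shuffle the words of the sentence."""
        words = sentence.split()
        random.shuffle(words)
        return " ".join(words)

    def sensitivity(sentence: str) -> float:
        """1 - cosine similarity between original and perturbed embeddings."""
        emb = model.encode([sentence, perturb(sentence)])
        cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
        return 1.0 - float(cos)

    print(sensitivity("The quick brown fox jumps over the lazy dog"))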
Dense word vectors or "word embeddings", which encode semantic properties of words, are now integral to NLP tasks such as machine translation (MT), question answering (QA), word sense disambiguation (WSD), and information retrieval (IR). In this paper, we use various existing approaches to create multiple word embeddings for 14 Indian languages. We place these embeddings for all these languages, namely Assamese, Bengali, Gujarati, Hindi, Kannada, Konkani, Malayalam, Marathi, Nepali, Odiya, Punjabi, Sanskrit, Tamil and Telugu, in a single repository. Relatively newer approaches that emphasize catering to context (BERT, ELMo, etc.) have shown significant improvements but require large amounts of resources to produce a usable model. We release pre-trained embeddings generated using both contextual and non-contextual approaches. We also use MUSE and XLM to train cross-lingual embeddings for all the above languages. To show the efficacy of our embeddings, we evaluate our embedding models on XPOS, UPOS and NER tasks for all these languages. We release a total of 436 models using 8 different approaches. We hope they are useful for resource-constrained Indian language NLP. The title of this paper refers to the famous novel "A Passage to India" by E.M. Forster, originally published in 1924.
Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies. In this paper, we propose the Prompts Augmented by Retrieval Crosslingually (PARC) pipeline to improve the zero-shot performance on low-resource languages (LRLs) by augmenting the context with semantically similar sentences retrieved from a high-resource language (HRL) as prompts. PARC improves the zero-shot performance on three downstream tasks (binary sentiment classification, topic categorization and natural language inference) with multilingual parallel test sets across 10 LRLs covering 6 language families in both unlabeled settings (+5.1%) and labeled settings (+16.3%). PARC-labeled also outperforms the finetuning baseline by 3.7%. We find a significant positive correlation between cross-lingual transfer performance on the one hand, and both the similarity between the high- and low-resource languages and the amount of low-resource pretraining data on the other. A robustness analysis suggests that PARC has the potential to achieve even stronger performance with more powerful MPLMs.
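As a rough, hedged sketch of the pipeline's shape (retrieve semantically similar high-resource sentences and prepend them to the low-resource input as a prompt), the snippet below uses a generic multilingual sentence encoder; the model name, the retrieval pool, and the prompt template are all assumptions for illustration, not the released PARC implementation.

    # Hedged sketch of a retrieval-augmented cross-lingual prompt (PARC-style).
    # Encoder choice, retrieval pool, and prompt format are illustrative only.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    # Hypothetical labeled high-resource (English) pool.
    hrl_pool = [
        ("The movie was wonderful and moving.", "positive"),
        ("A dull, lifeless film I could not finish.", "negative"),
        ("An instant classic with superb acting.", "positive"),
    ]

    def build_prompt(lrl_sentence: str, k: int = 2) -> str:
        """Retrieve the k most similar HRL examples and prepend them as a prompt."""
        pool_emb = encoder.encode([s for s, _ in hrl_pool])
        query_emb = encoder.encode([lrl_sentence])[0]
        sims = pool_emb @ query_emb / (
            np.linalg.norm(pool_emb, axis=1) * np.linalg.norm(query_emb))
        top = np.argsort(-sims)[:k]
        demos = "\n".join(f"Review: {hrl_pool[i][0]} Sentiment: {hrl_pool[i][1]}" for i in top)
        return f"{demos}\nReview: {lrl_sentence} Sentiment:"

    print(build_prompt("Example low-resource-language review goes here"))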
Given the impact of language models on the field of Natural Language Processing, a number of Spanish encoder-only masked language models (aka BERTs) have been trained and released. These models were developed either within large projects using very large private corpora or by means of smaller-scale academic efforts leveraging freely available data. In this paper we present a comprehensive head-to-head comparison of language models for Spanish with the following results: (i) Previously ignored multilingual models from large companies fare better than monolingual models, substantially changing the evaluation landscape of language models in Spanish; (ii) Results across the monolingual models are not conclusive, with supposedly smaller and inferior models performing competitively. Based on these empirical results, we argue for the need for more research to understand the factors underlying them. In this sense, the effect of corpus size, quality and pre-training techniques needs to be further investigated to be able to obtain Spanish monolingual models significantly better than the multilingual ones released by large private companies, especially in the face of rapid ongoing progress in the field. The recent activity in the development of language technology for Spanish is to be welcomed, but our results show that building language models remains an open, resource-heavy problem which requires marrying resources (monetary and/or computational) with the best research expertise and practice.
African languages still lag behind in the advances of natural language processing technology, one reason being the lack of representative data, and techniques that can transfer information between languages can help alleviate the data scarcity problem. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create Setswana-Sepedi cross-lingual embeddings. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in n-dimensional space. The idea of word embeddings is based on the distributional hypothesis that semantically similar words occur in similar contexts (Harris, 1954). Cross-lingual embeddings build on monolingual embeddings by learning a shared vector space for two separately trained monolingual vector spaces, so that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual method in VecMap to train the Setswana-Sepedi cross-lingual embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representations using a semantic evaluation task. For the semantic similarity task, we translate the WordSim and SimLex datasets into Setswana and Sepedi. We release this dataset as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is an improvement in the semantic representation of the word embeddings.
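A hedged sketch of the intrinsic evaluation step described above: load the mapped Setswana and Sepedi vectors, score translated word pairs by cosine similarity, and correlate the scores with human similarity ratings. The file names, file format, and use of gensim are illustrative assumptions rather than the authors' actual setup.

    # Hedged sketch: intrinsic evaluation of mapped cross-lingual embeddings by
    # correlating cosine similarities with gold similarity judgements.
    # Paths and the word2vec text format are assumptions for illustration.
    from gensim.models import KeyedVectors
    from scipy.stats import spearmanr

    tsn = KeyedVectors.load_word2vec_format("setswana.mapped.vec")  # hypothetical mapped output
    nso = KeyedVectors.load_word2vec_format("sepedi.mapped.vec")

    def score_pairs(pairs):
        """pairs: (setswana_word, sepedi_word, gold_similarity) triples."""
        model_scores, gold_scores = [], []
        for w1, w2, gold in pairs:
            if w1 in tsn.key_to_index and w2 in nso.key_to_index:
                v1, v2 = tsn[w1], nso[w2]
                cos = float(v1 @ v2 / ((v1 @ v1) ** 0.5 * (v2 @ v2) ** 0.5))
                model_scores.append(cos)
                gold_scores.append(gold)
        return spearmanr(model_scores, gold_scores).correlation

    # Usage: the pairs would come from the translated WordSim/SimLex files, e.g.
    # print(score_pairs(load_translated_simlex("simlex_tsn_nso.tsv")))  # hypothetical loader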
Commonsense reasoning is one of the key problems in natural language processing, but the relative scarcity of labeled data holds back progress for languages other than English. Pretrained cross-lingual models are a source of powerful language-agnostic representations, yet their inherent reasoning capabilities are still actively studied. In this work, we design a simple approach to commonsense reasoning which trains a linear classifier with weights derived from multi-head attention. To evaluate this approach, we create a multilingual Winograd Schema corpus by processing several datasets from prior work within a standardized pipeline and measure cross-lingual generalization ability in terms of out-of-sample performance. The method performs competitively with recent supervised and unsupervised approaches to commonsense reasoning, even when applied to other languages in a zero-shot manner. In addition, we demonstrate that most of the performance is given by the same small subset of attention heads for all studied languages, which provides evidence of universal reasoning capabilities in multilingual encoders.
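A hedged sketch of the general idea (not the paper's released code): extract per-head attention statistics from a multilingual encoder and fit a simple linear classifier on them. The model choice, the pooling of attention weights, and the toy data are illustrative assumptions.

    # Hedged sketch: multi-head attention weights from a multilingual encoder as
    # features for a linear classifier. Model, pooling, and labels are made up.
    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    name = "bert-base-multilingual-cased"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_attentions=True).eval()

    def attention_features(sentence: str) -> np.ndarray:
        """Mean attention weight per (layer, head), flattened into one vector."""
        inputs = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # out.attentions: tuple of [1, heads, seq, seq] tensors, one per layer.
        per_head = [a.mean(dim=(-1, -2)).squeeze(0) for a in out.attentions]
        return torch.cat(per_head).numpy()

    # Tiny illustrative training set (real work would use Winograd-style examples).
    texts = ["The trophy didn't fit because it was too big.",
             "The trophy didn't fit because it was too small."]
    labels = [0, 1]
    X = np.stack([attention_features(t) for t in texts])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict(X))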
Multilingual Language Models (MLLMs), such as mBERT, XLM, XLM-R, etc., have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has been a surge of work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions of future research.
Multilingual language models (MLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. So far, only ~28 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by developing SERENGETI, a set of massively multilingual language models that covers 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to four MLMs that each cover a number of African languages. SERENGETI outperforms the other models on 11 datasets across the eight tasks and achieves an average F1 of 82.27. We also perform error analysis on our models' performance and show the influence of mutual intelligibility when the models are applied under zero-shot settings. We will publicly release our models for research.
Deep contextual language models (LMs) like ELMo, BERT, and their successors have rapidly reshaped the landscape of natural language processing by pretraining a single model, followed by task-specific fine-tuning. Furthermore, multilingual versions of such models, like XLM-R and mBERT, have yielded promising zero-shot cross-lingual transfer results, potentially enabling NLP applications in many under-served and under-resourced languages. Due to this initial success, pretrained models are being used as "universal language models" as the starting point across different tasks, domains, and languages. This work explores the notion of "universality" by identifying seven dimensions along which a universal model should be able to scale, that is, perform equally well or reasonably well, in order to be useful across diverse settings. We outline the current theoretical and empirical results that support model performance across these dimensions, along with extensions that may help address some of their current limitations. Through this survey, we lay the foundation for understanding the capabilities and limitations of massive contextual language models and help discern research gaps and directions for future work to make these LMs inclusive of diverse applications, users, and linguistic phenomena.
Social media data has become a useful source of timely information about real-world crisis events. One of the main tasks related to the use of social media for disaster management is the automatic identification of crisis-related messages. Most studies on this topic have focused on analysing data for a particular type of event in a specific language. This limits the possibility of generalizing existing approaches, since the models cannot be directly applied to new types of events or other languages. In this work, we study the task of automatically classifying messages related to crisis events by leveraging cross-lingual and cross-domain labeled data. Our goal is to make use of labeled data from high-resource languages to classify messages in other (low-resource) languages and/or for new (previously unseen) types of crisis situations. In our study, we consolidate a large unified dataset from the literature containing multiple crisis events and languages. Our empirical findings show that it is indeed possible to leverage data from crisis events in English to classify the same type of event in other languages, such as Spanish and Italian (80.0% F1-score). Moreover, we achieve good performance for the cross-domain task (80.0% F1-score) in a cross-lingual setting. Overall, our work contributes to improving the data scarcity problem that is so important for multilingual crisis classification. In particular, it can mitigate cold-start situations in emergency events, when time is of the essence.
Multilingual pretrained models have demonstrated their effectiveness on many multilingual NLP tasks and enabled zero-shot or few-shot transfer from high-resource languages to low-resource ones. However, due to significant typological differences and contradictions between some languages, such models usually perform poorly on many languages and in cross-lingual settings, which shows the difficulty of learning a single model to handle massively diverse languages simultaneously. To alleviate this problem, we propose a new multilingual pretraining pipeline. We propose to generate language representations from the multilingual pretrained model and conduct linguistic analysis to show that language representation similarity reflects linguistic similarity from multiple perspectives, including language family, geographical sprachbund, lexicostatistics, and syntax. We then cluster all the target languages into multiple groups and call each group a representation sprachbund. Languages in the same representation sprachbund should therefore boost each other in both pretraining and fine-tuning, as they share rich linguistic similarity. We pretrain one multilingual model for each representation sprachbund. Experiments are conducted on cross-lingual benchmarks and significant improvements are achieved compared with strong baselines.
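A hedged sketch of the grouping step described above: given one vector per language (however it is obtained from the pretrained model), cluster the languages with k-means so that each cluster plays the role of a representation sprachbund. The vectors and the number of clusters below are invented for illustration.

    # Hedged sketch: cluster per-language representation vectors into groups
    # ("representation sprachbunds"). The vectors are random placeholders; in the
    # paper they are derived from a multilingual pretrained model.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    languages = ["en", "de", "fr", "ru", "hi", "ur", "zh", "ja", "sw", "yo"]
    lang_vecs = rng.normal(size=(len(languages), 32))  # placeholder representations

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(lang_vecs)
    for cluster_id in range(3):
        members = [l for l, c in zip(languages, kmeans.labels_) if c == cluster_id]
        print(f"representation sprachbund {cluster_id}: {members}")
    # One multilingual model would then be pretrained per cluster.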
Multilingual pretrained language models have shown impressive performance on cross-lingual tasks. They greatly facilitate the application of natural language processing to low-resource languages. However, current multilingual models still underperform on some languages. In this paper, we propose CINO (Chinese Minority Pre-trained Language Model), a multilingual pretrained language model for Chinese minority languages. It covers Standard Chinese, Yue Chinese, and six other minority languages. To evaluate the cross-lingual ability of multilingual models on minority languages, we collect documents from Wikipedia and news websites and construct two text classification datasets, WCM (Wiki-Chinese-Minority) and CMNews (Chinese-Minority-News). We show that CINO significantly outperforms the baselines on various classification tasks. The CINO models and datasets are publicly available at http://cino.hfl-rc.com.
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
Low-resource languages, such as the Baltic languages, benefit from large multilingual language models (LMs) with remarkable cross-lingual transfer capabilities. This work is an interpretation and analysis study of the cross-lingual representations of multilingual LMs. Previous work hypothesized that these LMs internally project representations of different languages into a shared cross-lingual space. However, the literature produced contradictory results. In this paper, we revisit prior work claiming that "BERT is not an Interlingua" and show that different languages do converge to a shared space in such language models, given an alternative selection strategy or similarity index. We then conduct a cross-lingual representation analysis of the two most popular multilingual LMs using 378 pairwise language comparisons. We find that while most languages share a joint cross-lingual space, some do not. However, we observe that the Baltic languages do belong to the shared space.
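Representation comparisons of this kind often use linear CKA as the similarity index; as a hedged illustration (not necessarily the paper's exact index), the snippet below computes linear CKA between two sets of sentence representations, e.g. encodings of parallel sentences in two languages.

    # Hedged sketch: linear CKA between two representation matrices X and Y
    # (rows = aligned sentences, columns = hidden dimensions). Data is random
    # here; in practice X and Y would come from a multilingual encoder.
    import numpy as np

    def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
        """Linear Centered Kernel Alignment between row-aligned matrices."""
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
        hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
        norm_x = np.linalg.norm(X.T @ X, "fro")
        norm_y = np.linalg.norm(Y.T @ Y, "fro")
        return float(hsic / (norm_x * norm_y))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 768))            # e.g. sentences encoded in language A
    Y = X @ rng.normal(size=(768, 768)) * 0.5  # a correlated "language B" view
    print(round(linear_cka(X, Y), 3))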