Due to the crucial role pretrained language models play in modern NLP, several benchmarks have been proposed to evaluate them. In spite of these efforts, no publicly available benchmark with diverse coverage currently exists for evaluating Arabic, which makes it challenging to measure progress for both Arabic and multilingual language models. This challenge is compounded by the fact that any benchmark targeting Arabic needs to take into account that Arabic is not a single language but rather a collection of languages and varieties. In this work, we introduce ORCA, a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks, exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models. We also provide a public leaderboard with a unified single-number evaluation metric (ORCA score) to facilitate future research.
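As an illustration of how a unified single-number metric of this kind is typically computed, the sketch below macro-averages per-task scores within each task cluster and then averages the cluster means; the cluster names, task names, scores, and the aggregation rule itself are assumptions for illustration, not ORCA's published specification.

```python
# Illustrative macro-average in the spirit of a single-number benchmark score.
# All names and numbers below are made up; check the ORCA leaderboard for the
# actual task clusters and aggregation rule.
from statistics import mean

per_task_scores = {
    "sentence_classification": {"task_a": 81.2, "task_b": 77.5},
    "structured_prediction": {"task_c": 90.1},
}

# Average tasks within each cluster, then average the cluster means.
cluster_means = {cluster: mean(scores.values()) for cluster, scores in per_task_scores.items()}
overall_score = mean(cluster_means.values())
print(cluster_means, round(overall_score, 2))
```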
Pretrained contextualized text representation models learn effective representations of natural language that make it understandable to machines. Following the breakthrough of the attention mechanism, a new generation of pretrained models has been proposed, achieving strong performance since the introduction of the Transformer. Bidirectional Encoder Representations from Transformers (BERT) has become the state-of-the-art model for language understanding. Despite this success, most available models have been trained on Indo-European languages, while similar research for under-represented languages and dialects remains sparse. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for under-represented languages, with a specific focus on the Tunisian dialect. We evaluate our language model on a sentiment analysis task, a dialect identification task, and a reading comprehension question-answering task. We show that using noisy web-crawled data instead of structured data (Wikipedia, articles, etc.) is more suitable for such non-standardized languages. Moreover, the results indicate that a relatively small web-crawled dataset leads to performance on par with that obtained using larger datasets. Finally, our TunBERT model reaches or improves the state of the art on all three downstream tasks. We release the TunBERT pretrained model and the datasets used for fine-tuning.
Task-agnostic generative pretraining (GPT) has recently proved promising for zero- and few-shot learning, gradually diverting attention from the expensive supervised learning paradigm. Although the community is accumulating knowledge about the capabilities of English-language autoregressive models such as GPT-3 that adopt this generative approach, scholarship about these models remains acutely Anglocentric. Consequently, the community currently has serious gaps in its understanding of this class of models, their potential, and their societal impacts in diverse settings, linguistic traditions, and cultures. To alleviate this issue for Arabic, a collection of diverse languages and language varieties spoken by a population of more than 400 million, we introduce JASMINE, a suite of powerful Arabic autoregressive Transformer language models ranging in size from 300 million to 13 billion parameters. We pretrain our new models on large amounts of diverse data (400GB of text) from different Arabic varieties and domains. We evaluate JASMINE extensively in both intrinsic and extrinsic settings, using a comprehensive benchmark for zero- and few-shot learning across a wide range of NLP tasks. We also carefully develop and release a novel benchmark for both automated and human evaluation of Arabic autoregressive models, focused on investigating potential social biases, harms, and toxicity in these models. We aim to responsibly release our models to interested researchers, along with code for experimenting with them.
Language-specific pretrained models have proven to be more accurate than multilingual ones in monolingual evaluation settings, and Arabic is no exception. However, we find that previously released Arabic BERT models were significantly undertrained. In this technical report, we present JABER, Junior Arabic BERt, our pretrained language model prototype dedicated to Arabic. We conduct an empirical study to systematically evaluate the performance of models across a diverse set of existing Arabic NLU tasks. Experimental results show that JABER achieves state-of-the-art performance on ALUE, a new benchmark for Arabic language understanding evaluation, as well as on a well-established in-house benchmark.
Multilingual language models (MLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. So far, only ~28 out of ~2,000 African languages are covered by existing language models. We ameliorate this limitation by developing SERENGETI, a set of massively multilingual language models that cover 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to four MLMs that each cover a number of African languages. SERENGETI outperforms the other models on 11 datasets across the eight tasks and achieves an average F1 of 82.27. We also perform error analysis of our models' performance and show the influence of mutual intelligibility when the models are applied under zero-shot settings. We will publicly release our models for research.
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
In this paper, we present TweetNLP, an integrated platform for natural language processing (NLP) in social media. TweetNLP supports a diverse set of NLP tasks, including generic focus areas such as sentiment analysis and named entity recognition, as well as social-media-specific tasks such as emoji prediction and offensive language identification. The task-specific systems are powered by reasonably sized Transformer-based language models specialized for social media text (in particular, Twitter) that can be run without dedicated hardware or cloud services. The main contributions of TweetNLP are: (1) an integrated Python library for a modern toolkit supporting social media analysis using our various task-specific models adapted to the social domain; (2) an interactive online demo for codeless experimentation using our models; and (3) a tutorial covering a wide variety of typical social media applications.
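As a rough illustration of how such Twitter-specialized models are typically consumed from Python, the sketch below loads one through the Hugging Face transformers pipeline; the checkpoint name is an assumption for illustration and is not necessarily the one shipped with TweetNLP, and the tweetnlp package's own API is not reproduced here.

```python
# Minimal sketch: running a Twitter-specialized sentiment classifier from Python.
# The model id below is an assumption for illustration; the TweetNLP library
# exposes its own `tweetnlp` package, whose exact API is not shown here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",  # assumed checkpoint
)
print(classifier("Loving the new NLP toolkit for social media!"))
```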
Language identification (LID) is a crucial precursor for NLP, especially for mining web data. Problematically, most of the world's 7,000+ languages today are not covered by LID technologies. We address this pressing issue for Africa by introducing AfroLID, a neural LID toolkit for 517 African languages and varieties. AfroLID exploits a multi-domain web dataset manually curated from across 14 language families utilizing five orthographic systems. When evaluated on our blind test set, AfroLID achieves an F1 score of 95.89. We also compare AfroLID to five existing LID tools that each cover a small number of African languages, finding that it outperforms them on most languages. We further show the utility of AfroLID in the wild by testing it on the acutely under-served Twitter domain. Finally, we offer a number of controlled case studies and perform a linguistically motivated error analysis that allow us to showcase both AfroLID's powerful capabilities and its limitations.
We present NusaCrowd, a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide variety of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities across a wide range of tasks. Our largest model, with 7.5 billion parameters, sets a new state of the art in few-shot learning on more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (+7.4% absolute accuracy improvement in the 0-shot setting and +9.4% in the 4-shot setting) and natural language inference (+5.4% in each of the 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room to improve surface-form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models on hate speech detection in five languages and find that they have limitations similar to comparably sized GPT-3 models.
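To make the few-shot setup concrete, the sketch below prompts a publicly released multilingual autoregressive checkpoint with two in-context examples through transformers; the checkpoint name and the toy prompt are assumptions for illustration.

```python
# Minimal few-shot prompting sketch for a multilingual autoregressive model.
# The checkpoint name is an assumption; substitute whichever released size you use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/xglm-564M"  # assumed small released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Two labeled in-context examples followed by an unlabeled query; no gradient updates.
prompt = (
    "Review: Ce film est magnifique. Sentiment: positive\n"
    "Review: Este libro es terrible. Sentiment: negative\n"
    "Review: Das Essen war ausgezeichnet. Sentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```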
Data augmentation is an important component both in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/GEM-benchmark/NL-Augmenter).
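To make the two central concepts concrete, the sketch below mimics a transformation and a filter with plain Python stand-ins; it does not reproduce NL-Augmenter's actual base classes or interfaces, which are documented in the repository.

```python
# Illustrative stand-ins for the two NL-Augmenter concepts: a transformation
# (modifies the data) and a filter (selects a split by some property).
# These are not NL-Augmenter's real interfaces, only a conceptual sketch.
import random

def swap_adjacent_words(sentence: str, seed: int = 0) -> str:
    """Transformation: perturb a sentence by swapping one adjacent word pair."""
    words = sentence.split()
    if len(words) < 2:
        return sentence
    i = random.Random(seed).randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def is_short_text(sentence: str, max_words: int = 10) -> bool:
    """Filter: keep only examples with at most `max_words` words."""
    return len(sentence.split()) <= max_words

data = ["the cat sat on the mat", "a very long sentence " * 5]
augmented = [swap_adjacent_words(s) for s in data]    # perturbed robustness probes
short_split = [s for s in data if is_short_text(s)]   # filtered subset
print(augmented, short_split)
```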
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has been a surge of work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual, and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Since the appearance of BERT, Transformer language models and transfer learning have become the state of the art for natural language understanding tasks. Recently, several works have applied domain-specific pretraining, building dedicated models for, e.g., scientific papers and medical documents. In this work, we present RoBERTuito, a pretrained language model for user-generated content in Spanish, trained on 500 million tweets in Spanish. Benchmarks on four tasks involving user-generated text show that RoBERTuito outperforms other pretrained language models for Spanish. To facilitate further research, we make RoBERTuito publicly available on the HuggingFace Model Hub.
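A checkpoint published on the HuggingFace Model Hub is typically loaded as in the sketch below; the repository id used here is an assumption and should be checked against the authors' model card.

```python
# Minimal sketch: loading a Hub-hosted Spanish tweet model for masked-word prediction.
# The repository id is an assumption; verify it against the RoBERTuito model card.
from transformers import AutoTokenizer, pipeline

model_id = "pysentimiento/robertuito-base-uncased"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
fill_mask = pipeline("fill-mask", model=model_id, tokenizer=tokenizer)
print(fill_mask(f"hoy hace mucho {tokenizer.mask_token} en buenos aires")[:3])
```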
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is performed). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Sentence embeddings are commonly used in text clustering and semantic retrieval tasks. State-of-the-art sentence representation methods are based on artificial neural networks fine-tuned on large collections of manually labeled sentence pairs. Sufficient amounts of annotated data are available for high-resource languages such as English or Chinese, but for less popular languages multilingual models have to be used, which offer lower performance. In this publication, we address this problem by proposing a method for training effective language-specific sentence encoders without manually labeled data. Our approach is to automatically construct a dataset of paraphrase pairs from sentence-aligned bilingual text corpora. We then use the collected data to fine-tune a Transformer language model with an additional recurrent pooling layer. Our sentence encoder can be trained in less than a day on a single graphics card, achieving high performance on a diverse set of sentence-level tasks. We evaluate our method on eight linguistic tasks in Polish and compare it with the best available multilingual sentence encoders.
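The described architecture, a Transformer encoder whose token states are summarized by an additional recurrent pooling layer into a fixed-size sentence vector, could look roughly like the PyTorch sketch below; the base checkpoint, hidden sizes, and pooling details are assumptions for illustration rather than the authors' exact configuration.

```python
# Rough sketch of a sentence encoder: Transformer token states pooled by a
# recurrent layer into one vector per sentence. Checkpoint and sizes are assumed.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RecurrentPoolingEncoder(nn.Module):
    def __init__(self, base_model: str = "xlm-roberta-base", hidden: int = 768):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(base_model)
        # Recurrent pooling: an LSTM reads the token states; its final hidden
        # state is used as the sentence embedding.
        self.pooler = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, input_ids, attention_mask):
        token_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                      # (batch, seq_len, hidden)
        _, (last_hidden, _) = self.pooler(token_states)
        return last_hidden.squeeze(0)            # (batch, hidden)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = RecurrentPoolingEncoder()
batch = tokenizer(["Ala ma kota.", "Kot ma Alę."], padding=True, return_tensors="pt")
embeddings = encoder(batch["input_ids"], batch["attention_mask"])
print(embeddings.shape)  # torch.Size([2, 768])
```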
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, a pixel-based language encoder that suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain an 86M-parameter PIXEL model on the same English data as BERT and evaluate it on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks for scripts that are not found in the pretraining data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust to noisy text inputs than BERT, further confirming the benefits of modelling language with pixels.
The appearance of complex attention-based language models such as BERT, RoBERTa, or GPT-3 has made it possible to tackle highly complex tasks in many scenarios. However, when applied to specific domains, these models encounter considerable difficulties. This is the case of social networks such as Twitter, an ever-changing stream of information written in informal and complex language, where each message requires careful evaluation to be understood, even by humans, given the important role of context. Addressing tasks in this domain through natural language processing involves severe challenges, and when powerful state-of-the-art multilingual models are applied to this scenario, language-specific nuances tend to get lost in translation. To face these challenges we present BERTuit, the largest Transformer proposed so far for Spanish, pretrained with RoBERTa optimization on a massive dataset of 230M Spanish tweets. Our motivation is to provide a powerful resource for better understanding Spanish Twitter and for applications focused on this social network, with a special emphasis on solutions devoted to tackling the spread of misinformation on this platform. BERTuit is evaluated on several tasks and compared against M-BERT, XLM-RoBERTa, and XLM-T, very competitive multilingual Transformers. The utility of our approach is shown with applications: a zero-shot methodology to visualize hoaxes and an analysis of author groups spreading misinformation. Misinformation spreads wildly on platforms such as Twitter in languages other than English, meaning that the performance of Transformers may suffer when transferred outside English-speaking communities.
Pre-trained transformers are now the de facto models in Natural Language Processing given their state-of-the-art results in many tasks and languages. However, most of the current models have been trained on languages for which large text resources are already available (such as English, French, Arabic, etc.). Therefore, there are still a number of low-resource languages that need more attention from the community. In this paper, we study the Algerian dialect which has several specificities that make the use of Arabic or multilingual models inappropriate. To address this issue, we collected more than one million Algerian tweets, and pre-trained the first Algerian language model: DziriBERT. When compared with existing models, DziriBERT achieves better results, especially when dealing with the Roman script. The obtained results show that pre-training a dedicated model on a small dataset (150 MB) can outperform existing models that have been trained on much more data (hundreds of GB). Finally, our model is publicly available to the community.
The goal of stance detection is to determine the viewpoint expressed in a piece of text towards a target. These viewpoints or contexts are often expressed in many different languages, depending on the user and the platform, which can be a local news outlet, a social media platform, a news forum, etc. Most research in stance detection, however, has been limited to working with a single language and on a few limited targets, with little work on cross-lingual stance detection. Moreover, non-English sources of labelled data are often scarce and present additional challenges. Recently, large multilingual language models have substantially improved performance on many non-English tasks, especially with a limited number of examples. This highlights the importance of model pretraining and of the ability to learn from few examples. In this paper, we present the most comprehensive study of cross-lingual stance detection to date: we experiment with 12 diverse datasets in 12 languages from 6 language families, each with 6 low-resource evaluation settings. For our experiments, we build on pattern-exploiting training, proposing the addition of a novel label encoder to simplify the verbalisation procedure. We further propose sentiment-based pretraining for stance data, which shows sizeable improvements of more than 6% F1 absolute in low-shot settings compared to several strong baselines.
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a broad range of resources, focusing on those that are publicly available. In addition to providing information about the available linguistic resources, we present a set of recommendations and identify gaps in the available data for conducting research and building applications in Turkish linguistics and natural language processing.