As pre-trained language models become more resource-demanding, the inequality between resource-rich languages such as English and resource-scarce languages is worsening. This can be attributed to the fact that the amount of available training data in each language follows a power-law distribution, and most languages belong to the long tail of the distribution. Several research areas attempt to alleviate this problem; for example, in cross-lingual transfer learning and multilingual training, the goal is to benefit long-tail languages with knowledge acquired from resource-rich languages. Despite their success, existing works have mainly focused on experimenting with as many languages as possible, and as a result targeted in-depth analyses are mostly absent. In this study, we focus on a single low-resource language and conduct extensive evaluation and probing experiments using cross-lingual post-training (XPT). To make the transfer scenario challenging, we choose Korean as the target language, as it is a language isolate and thus shares almost no typological similarity with English. The results show that XPT not only outperforms, or performs on a par with, monolingual models trained with far larger amounts of data, but is also highly efficient in the transfer process.
While large pre-trained models have transformed the field of natural language processing (NLP), the high training cost and low cross-lingual availability of such models prevent the new advances from being equally shared by users across all languages, especially the less spoken ones. To promote equal opportunities for all language speakers in NLP research and to reduce energy consumption for sustainability, this study proposes an effective and energy-efficient framework GreenPLM that uses bilingual lexicons to directly translate language models of one language into other languages at (almost) no additional cost. We validate this approach in 18 languages and show that this framework is comparable to, if not better than, other heuristics trained with high cost. In addition, when given a low computational cost (2.5%), the framework outperforms the original monolingual language models in six out of seven tested languages. We release language models in 50 languages translated from English and the source code here.
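As a rough illustration of the lexicon-based translation idea sketched in this abstract (not the released GreenPLM code; the function and variable names below are hypothetical), one could initialise target-language embeddings by copying the vectors of the English translations found in a bilingual lexicon:

```python
import numpy as np

def translate_embeddings(src_vocab, src_emb, bilingual_lexicon, tgt_vocab):
    """Initialise target-language embeddings by copying the vectors of English
    translations found in a bilingual lexicon (illustrative sketch only)."""
    dim = src_emb.shape[1]
    src_index = {w: i for i, w in enumerate(src_vocab)}
    tgt_emb = np.random.normal(0, 0.02, (len(tgt_vocab), dim))  # fallback init for uncovered words
    for j, tgt_word in enumerate(tgt_vocab):
        en_word = bilingual_lexicon.get(tgt_word)
        if en_word in src_index:
            tgt_emb[j] = src_emb[src_index[en_word]]            # direct lexicon transfer
    return tgt_emb

# Toy usage with a three-word German-English lexicon.
src_vocab = ["house", "dog", "tree"]
src_emb = np.random.rand(3, 8)
lexicon = {"haus": "house", "hund": "dog", "baum": "tree"}
tgt_emb = translate_embeddings(src_vocab, src_emb, lexicon, ["haus", "hund", "baum"])
print(tgt_emb.shape)  # (3, 8)
```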
Large pre-trained language models (LMs) have recently become increasingly popular. Training these models requires ever more computational resources, and most existing models are trained on English text only, while training them in other languages is extremely expensive. To alleviate this problem, we introduce a method called WECHSEL to transfer English models to new languages. We exchange the tokenizer of the English model for a tokenizer in the target language and initialize the token embeddings by leveraging multilingual static word embeddings covering both English and the target language. We use WECHSEL to transfer GPT-2 and RoBERTa models to four other languages (French, German, Chinese and Swahili). WECHSEL improves over previously proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch in the target language while requiring far less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.
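The initialisation step can be pictured as follows. This is a simplified sketch of the idea described in the abstract (similarity-weighted combinations of source-token embeddings, with similarities taken from aligned static embeddings), not the authors' released implementation; the function name and the choices of k and temperature are illustrative:

```python
import numpy as np

def init_target_embeddings(src_emb, src_static, tgt_static, k=10, temperature=0.1):
    """Initialise target-token embeddings as similarity-weighted averages of
    source-token embeddings, with similarities computed in a shared static
    embedding space (simplified sketch of the WECHSEL idea)."""
    # Cosine similarity between every target token and every source token.
    s = src_static / np.linalg.norm(src_static, axis=1, keepdims=True)
    t = tgt_static / np.linalg.norm(tgt_static, axis=1, keepdims=True)
    sim = t @ s.T                                   # (tgt_vocab, src_vocab)
    tgt_emb = np.zeros((sim.shape[0], src_emb.shape[1]))
    for i, row in enumerate(sim):
        nearest = np.argsort(row)[-k:]              # k most similar source tokens
        weights = np.exp(row[nearest] / temperature)
        weights /= weights.sum()
        tgt_emb[i] = weights @ src_emb[nearest]     # weighted average of their model embeddings
    return tgt_emb

# Toy run: 100 source tokens, 50 target tokens, 300-dim static / 768-dim model embeddings.
rng = np.random.default_rng(0)
emb = init_target_embeddings(rng.normal(size=(100, 768)),
                             rng.normal(size=(100, 300)),
                             rng.normal(size=(50, 300)))
print(emb.shape)  # (50, 768)
```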
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has been a large body of work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Pre-trained language models are trained on large-scale unsupervised data, and they can be fine-tuned on small-scale labeled datasets and achieve good results. Multilingual pre-trained language models can be trained on multiple languages and understand multiple languages at the same time. At present, research on pre-trained models mainly focuses on rich-resource languages, while there is relatively little research on low-resource languages such as minority languages, and public multilingual pre-trained language models do not work well for minority languages. Therefore, this paper constructs a multilingual pre-trained language model named MiLMo that performs better on minority language tasks, including Mongolian, Tibetan, Uyghur, Kazakh and Korean. To solve the problem of scarcity of datasets on minority languages and to verify the effectiveness of the MiLMo model, this paper constructs a minority multilingual text classification dataset named MiTC and trains a word2vec model for each language. By comparing the word2vec models and the pre-trained model on the text classification task, this paper provides an optimal scheme for downstream task research on minority languages. The final experimental results show that the performance of the pre-trained model is better than that of the word2vec models, and it achieves the best results in minority multilingual text classification. The multilingual pre-trained language model MiLMo, the multilingual word2vec models and the multilingual text classification dataset MiTC are published at https://milmo.cmli-nlp.com.
Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on downstream tasks for both high-resource and low-resource languages. However, there is still a considerable performance gap for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) -- fine-tuning a multilingual PLM on monolingual text of the target language using the pre-training objective. However, adapting to each target language individually takes up a large amount of disk space and limits the cross-lingual transfer ability of the resulting models, because they have been specialized to a single language. In this paper, we perform multilingual adaptive fine-tuning (MAFT) on the 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we remove vocabulary tokens from the embedding layer that correspond to non-African writing scripts before MAFT, thus reducing the model size by around 50%. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification and sentiment classification) shows that our approach is competitive with applying LAFT on individual languages while requiring far less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter-efficient fine-tuning methods.
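The vocabulary-reduction step can be illustrated with a small sketch. The script filter below is a heuristic stand-in (the paper's actual token selection is not reproduced here), but it shows how dropping embedding rows for unwanted scripts shrinks the embedding matrix:

```python
import unicodedata
import torch

def keep_token(token, allowed_scripts=("LATIN", "ETHIOPIC", "ARABIC")):
    """Heuristic script filter (illustrative only): keep a token if every
    alphabetic character belongs to one of the allowed scripts."""
    for ch in token:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if not any(s in name for s in allowed_scripts):
                return False
    return True

def prune_embedding(vocab, embedding):
    """Drop embedding rows whose tokens fail the script filter and return the
    reduced vocabulary plus the smaller embedding matrix."""
    kept = [i for i, tok in enumerate(vocab) if keep_token(tok)]
    new_vocab = {vocab[i]: new_id for new_id, i in enumerate(kept)}
    new_embedding = embedding[kept].clone()
    return new_vocab, new_embedding

# Toy vocabulary mixing Latin, Ge'ez and CJK tokens.
vocab = ["the", "ነው", "语言", "na", "模型"]
emb = torch.randn(len(vocab), 8)
new_vocab, new_emb = prune_embedding(vocab, emb)
print(new_vocab, new_emb.shape)  # CJK rows removed
```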
This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We test on a set of different Sinhala text classification tasks, and our analysis shows that out of the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE and LASER), XLM-R is by far the best model for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which far outperform the existing pre-trained language models for Sinhala. We show that, when fine-tuned, these pre-trained language models set a very strong baseline for Sinhala text classification and are robust in situations where labeled data is insufficient for fine-tuning. We further provide a set of recommendations for using pre-trained models for Sinhala text classification. We also introduce new annotated datasets useful for future research in Sinhala text classification and publicly release our pre-trained models.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
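The "one additional output layer" recipe looks roughly like the minimal sketch below, written with the Hugging Face transformers library (which is not part of the original BERT release): a pre-trained encoder plus a single linear classifier over the [CLS] representation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

class BertClassifier(torch.nn.Module):
    """Pre-trained encoder plus one linear output layer, mirroring the
    fine-tuning recipe described above (illustrative sketch)."""
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).last_hidden_state   # (batch, seq, hidden)
        cls = hidden[:, 0]                                   # [CLS] representation
        return self.classifier(cls)                          # (batch, num_labels)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a simple example"], return_tensors="pt")
logits = BertClassifier()(**batch)
print(logits.shape)  # torch.Size([1, 2])
```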
Cross-lingual pre-training has achieved great success using monolingual and bilingual plain-text corpora. However, most pre-trained models ignore multilingual knowledge, which is language-agnostic but contains abundant cross-lingual structural alignment. In this paper, we propose XLM-K, a cross-lingual language model that incorporates multilingual knowledge during pre-training. XLM-K augments existing multilingual pre-training with two knowledge tasks, namely a masked entity prediction task and an object entailment task. We evaluate XLM-K on MLQA, NER and XNLI. Experimental results clearly demonstrate significant improvements over existing multilingual language models. The results on MLQA and NER show the superiority of XLM-K on knowledge-related tasks, and the success on XNLI demonstrates the better cross-lingual transferability obtained by XLM-K. More importantly, we provide a detailed probing analysis to confirm that the desired knowledge is captured by our pre-training scheme.
Multilingual pre-trained language models have shown impressive performance on cross-lingual tasks and greatly facilitate applications of natural language processing in low-resource languages. However, current multilingual models still underperform on some languages. In this paper, we propose CINO (Chinese Minority Pre-trained Language Model), a multilingual pre-trained language model for Chinese minority languages. It covers Standard Chinese, Yue Chinese and six other minority languages. To evaluate the cross-lingual ability of multilingual models on minority languages, we collect documents from Wikipedia and news websites and construct two text classification datasets, WCM (Wiki-Chinese-Minority) and CMNews (Chinese-Minority-News). We show that CINO notably outperforms the baselines on various classification tasks. The CINO model and the datasets are publicly available at http://cino.hfl-rc.com.
Given the impact of language models on the field of Natural Language Processing, a number of Spanish encoder-only masked language models (aka BERTs) have been trained and released. These models were developed either within large projects using very large private corpora or by means of smaller scale academic efforts leveraging freely available data. In this paper we present a comprehensive head-to-head comparison of language models for Spanish with the following results: (i) Previously ignored multilingual models from large companies fare better than monolingual models, substantially changing the evaluation landscape of language models in Spanish; (ii) Results across the monolingual models are not conclusive, with supposedly smaller and inferior models performing competitively. Based on these empirical results, we argue for the need of more research to understand the factors underlying them. In this sense, the effect of corpus size, quality and pre-training techniques needs to be further investigated to be able to obtain Spanish monolingual models significantly better than the multilingual ones released by large private companies, especially in the face of rapid ongoing progress in the field. The recent activity in the development of language technology for Spanish is to be welcomed, but our results show that building language models remains an open, resource-heavy problem which requires marrying resources (monetary and/or computational) with the best research expertise and practice.
Dense word vectors or 'word embeddings', which encode semantic properties of words, are now integral to NLP tasks such as machine translation (MT), question answering (QA), word sense disambiguation (WSD) and information retrieval (IR). In this paper, we use various existing approaches to create multiple word embeddings for 14 Indian languages. We place these embeddings for all these languages -- Assamese, Bengali, Gujarati, Hindi, Kannada, Konkani, Malayalam, Marathi, Nepali, Odiya, Punjabi, Sanskrit, Tamil and Telugu -- in a single repository. Relatively newer approaches that emphasize catering to context (BERT, ELMo, etc.) have shown significant improvements, but require a large amount of resources to produce usable models. We release pre-trained embeddings generated using both contextual and non-contextual approaches. We also use MUSE and XLM to train cross-lingual embeddings for all the above languages. To show the efficacy of our embeddings, we evaluate our embedding models on XPOS, UPOS and NER tasks for all these languages. We release a total of 436 models using 8 different approaches. We hope they will be useful for resource-constrained Indian language NLP. The title of the paper refers to Forster's famous novel 'A Passage to India', first published in 1924.
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
For many tasks, Transformer-based architectures have achieved state-of-the-art results, leading to a practical shift from the use of task-specific architectures to the fine-tuning of pre-trained language models. The ongoing trend consists in training models with ever-growing amounts of data and parameters, which requires considerable resources. It has led to a strong push to improve resource efficiency through algorithmic and hardware improvements that are evaluated only for English. This raises questions about their usability when applied to small-scale learning problems, for which only a limited amount of training data is available, in particular for under-resourced language tasks. The lack of appropriately sized corpora is a hindrance to applying data-driven and transfer-learning-based approaches. In this paper, we build on recent efforts dedicated to the usability of Transformer-based models and propose to evaluate these improvements on French, a language with comparatively few resources. We address the instability related to data scarcity by investigating various training strategies based on data augmentation, hyperparameter optimization and cross-lingual transfer. We also introduce a new compact model for French, FrALBERT, which proves competitive in low-resource settings.
This paper presents a new pre-trained language model, DeBERTaV3, which improves the original DeBERTa model by replacing masked language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance, because the training losses of the discriminator and the generator pull the token embeddings in different directions, creating 'tug-of-war' dynamics. We therefore propose a new gradient-disentangled embedding sharing method that avoids these tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model. We pre-trained DeBERTaV3 using the same settings as DeBERTa to demonstrate its exceptional performance on a wide range of downstream natural language understanding (NLU) tasks. Taking the GLUE benchmark with eight tasks as an example, the DeBERTaV3 Large model achieves an average score of 91.37%, which is 1.37% higher than DeBERTa and 1.91% higher than ELECTRA, setting a new state of the art (SOTA) among models with a similar structure. Furthermore, we pre-trained a multilingual model, mDeBERTa, and observed a larger improvement over strong baselines than with the English models. For example, mDeBERTa Base achieves 79.8% zero-shot cross-lingual accuracy on XNLI, a 3.6% improvement over XLM-R Base, creating a new SOTA on this benchmark. We make our pre-trained models and inference code publicly available at https://github.com/microsoft/deberta.
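A rough sketch of gradient-disentangled embedding sharing as described in this abstract (our reading of the idea, not Microsoft's implementation): the discriminator reuses the generator's embedding table through a stop-gradient plus a small residual table of its own, so the RTD loss cannot pull the shared embeddings away from the MLM objective.

```python
import torch
import torch.nn as nn

class GDESEmbedding(nn.Module):
    """Illustrative sketch of gradient-disentangled embedding sharing."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.generator_emb = nn.Embedding(vocab_size, hidden_size)
        self.residual_emb = nn.Embedding(vocab_size, hidden_size)
        nn.init.zeros_(self.residual_emb.weight)   # start from the shared table

    def generator_lookup(self, ids):
        return self.generator_emb(ids)             # MLM gradients update this table

    def discriminator_lookup(self, ids):
        shared = self.generator_emb(ids).detach()  # stop-gradient: RTD loss cannot reach it
        return shared + self.residual_emb(ids)     # RTD gradients only touch the residual

emb = GDESEmbedding(vocab_size=100, hidden_size=16)
ids = torch.tensor([[1, 2, 3]])
loss = emb.discriminator_lookup(ids).sum()
loss.backward()
print(emb.generator_emb.weight.grad is None, emb.residual_emb.weight.grad is not None)  # True True
```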
African languages still lag behind in the advances of natural language processing techniques; one reason is the lack of representative data, and techniques that can transfer information between languages can help to mitigate this lack of data. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create Setswana-Sepedi cross-lingual word embeddings. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in an n-dimensional space. The idea of word embeddings is based on the distributional hypothesis that semantically similar words are distributed in similar contexts (Harris, 1954). Cross-lingual embeddings leverage monolingual embeddings by learning a shared vector space for two separately trained sets of monolingual vectors, such that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual embedding method of VecMap to train the Setswana-Sepedi cross-lingual embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representations on a semantic evaluation task. For the semantic similarity task, we translate the WordSim and SimLex datasets into Setswana and Sepedi, and we release these datasets as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is an improvement in the semantic representation of the word embeddings.
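For intuition, the mapping step of cross-lingual embedding alignment can be sketched as an orthogonal Procrustes fit on a seed dictionary. This is a simplified, supervised stand-in for VecMap's unsupervised self-learning procedure, with random toy data in place of real Setswana and Sepedi vectors:

```python
import numpy as np

def procrustes_map(src_vecs, tgt_vecs):
    """Learn an orthogonal mapping W that takes source vectors onto their
    target translations (orthogonal Procrustes solution; simplified sketch)."""
    u, _, vt = np.linalg.svd(tgt_vecs.T @ src_vecs)
    return u @ vt

# Toy seed dictionary: 20 word pairs with 50-dimensional embeddings.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(50, 50)))       # random orthogonal "translation"
setswana = rng.normal(size=(20, 50))
sepedi = setswana @ q.T + rng.normal(size=(20, 50)) * 0.01
W = procrustes_map(setswana, sepedi)
mapped = setswana @ W.T                               # Setswana vectors in the Sepedi space
print(np.mean(np.linalg.norm(mapped - sepedi, axis=1)))  # small residual
```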
Idiomatic expressions can be problematic for natural language processing applications because their meaning cannot be inferred from their constituent words. The lack of successful methodological approaches and sufficiently large datasets has prevented the development of machine learning approaches for detecting idioms, especially for expressions that do not occur in the training set. We present an approach called MICE that uses contextual embeddings for this purpose. We present a new dataset of multi-word expressions with literal and idiomatic meanings and use it to train a classifier based on two state-of-the-art contextual word embeddings: ELMo and BERT. We show that deep neural networks using both embeddings perform better than existing approaches and are capable of detecting idiomatic word use even for expressions that were not present in the training set. We demonstrate the cross-lingual transfer of the developed models and analyze the size of the required dataset.
We present, Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. In each language, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization) for 9 out of the 11 languages. The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian language sentence. We also create manually annotated testsets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing testsets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
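The projection step can be sketched as follows: given automatically tagged English entities and word alignments to the Indian-language sentence, tags are copied across the alignment links and the BIO scheme is re-imposed. This is a simplified illustration with a hypothetical alignment; the released pipeline additionally relies on automatic taggers and aligners over the Samanantar corpus.

```python
def project_entities(en_tags, alignment, tgt_len):
    """Project BIO entity tags from an English sentence onto a target-language
    sentence through word alignments (simplified annotation-projection sketch)."""
    tgt_tags = ["O"] * tgt_len
    for en_idx, tgt_idx in alignment:                 # (source position, target position) pairs
        tag = en_tags[en_idx]
        if tag != "O":
            tgt_tags[tgt_idx] = tag.split("-")[1]     # keep only the entity type for now
    # Re-impose the BIO scheme on contiguous runs of the same entity type.
    out, prev = [], "O"
    for t in tgt_tags:
        if t == "O":
            out.append("O")
        else:
            out.append(("I-" if prev == t else "B-") + t)
        prev = t
    return out

# English: "Sachin Tendulkar plays cricket"; toy target alignment (0->0, 1->1, 3->2).
en_tags = ["B-PER", "I-PER", "O", "O"]
print(project_entities(en_tags, [(0, 0), (1, 1), (3, 2)], tgt_len=4))
# ['B-PER', 'I-PER', 'O', 'O']
```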
Some Transformer-based models are able to perform cross-lingual transfer learning: those models can be trained on a specific task in one language and give relatively good results on the same task in another language, despite having been pre-trained on monolingual tasks only. However, there is currently no consensus on whether these Transformer-based models learn universal patterns across languages. We propose a word-level, task-agnostic method to evaluate the alignment of the contextualized representations built by such models. We show that our method provides more accurate translated word pairs than previous methods for evaluating word-level alignment. Our results show that some inner layers of multilingual Transformer-based models outperform other explicitly aligned representations, and even more so according to a stricter definition of multilingual alignment.
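One generic way to probe layer-wise alignment of this kind (a stand-in illustration, not the authors' exact method) is to measure, per layer, how often the nearest target-language contextual vector of a source word is its actual translation:

```python
import numpy as np

def alignment_accuracy(src_reprs, tgt_reprs):
    """For each layer, compute how often the cosine-nearest target vector of a
    source word's contextual representation is its translation (generic probe)."""
    scores = []
    for src, tgt in zip(src_reprs, tgt_reprs):        # one (n_pairs, dim) matrix per layer
        s = src / np.linalg.norm(src, axis=1, keepdims=True)
        t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
        nearest = (s @ t.T).argmax(axis=1)            # cosine nearest neighbour per pair
        scores.append(float((nearest == np.arange(len(src))).mean()))
    return scores                                      # one retrieval accuracy per layer

# Toy data: 3 "layers" of contextual vectors for 100 translation pairs.
rng = np.random.default_rng(0)
src_layers = [rng.normal(size=(100, 32)) for _ in range(3)]
tgt_layers = [s + rng.normal(size=(100, 32)) * n for s, n in zip(src_layers, (0.1, 0.5, 2.0))]
print(alignment_accuracy(src_layers, tgt_layers))      # accuracy drops as noise grows
```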
The BLOOM model is a large open-source multilingual language model capable of zero-shot learning, but its pretraining was limited to 46 languages. To improve its zero-shot performance on unseen languages, it is desirable to adapt BLOOM, but previous works have only explored adapting small language models. In this work, we apply existing language adaptation strategies to BLOOM and benchmark its zero-shot prompting performance on eight new languages. We find language adaptation to be effective at improving zero-shot performance in new languages. Surprisingly, adapter-based finetuning is more effective than continued pretraining for large models. In addition, we discover that prompting performance is not significantly affected by language specifics, such as the writing system. It is primarily determined by the size of the language adaptation data. We also add new languages to BLOOMZ, which is a multitask finetuned version of BLOOM capable of following task instructions zero-shot. We find including a new language in the multitask fine-tuning mixture to be the most effective method to teach BLOOMZ a new language. We conclude that with sufficient training data language adaptation can generalize well to diverse languages. Our code is available at https://github.com/bigscience-workshop/multilingual-modeling/.
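For reference, the adapter-based finetuning mentioned above typically inserts small bottleneck modules like the following into a frozen model and trains only them; this is a generic sketch, not the exact BLOOM adaptation recipe:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, non-linearity, up-project,
    residual connection (generic sketch for illustration)."""
    def __init__(self, hidden_size=1024, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)        # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Only adapter parameters would be trained; the base model stays frozen.
adapter = BottleneckAdapter()
x = torch.randn(2, 5, 1024)
print(adapter(x).shape)                        # torch.Size([2, 5, 1024])
```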