Localizing a semantic parser to support new languages requires effective cross-lingual generalization. Recent work has found success with machine-translation or zero-shot methods, although these approaches can struggle to model how native speakers ask questions. We consider how to effectively leverage minimal annotated examples in new languages for few-shot cross-lingual semantic parsing. We introduce a first-order meta-learning algorithm to train a semantic parser with maximal sample efficiency during cross-lingual transfer. Our algorithm uses high-resource languages to train the parser and simultaneously optimizes for cross-lingual generalization to lower-resource languages. Results across six languages on ATIS demonstrate that our combination of generalization steps yields accurate semantic parsers sampling ≤10% of source training data in each new language. Our approach also trains a competitive model on Spider using English, with generalization to Chinese similarly sampling ≤10% of training data.
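A minimal sketch of the kind of first-order meta-update described above (a Reptile-style interpolation combined with a low-resource generalization gradient); `parser`, `loss_fn`, and the batch arguments are illustrative placeholders, not the paper's code, and the authors' exact algorithm may differ:

```python
import copy
import torch

def first_order_meta_step(parser, loss_fn, hi_res_batch, lo_res_batch,
                          inner_lr=1e-3, meta_lr=1e-4, inner_steps=3):
    """One first-order meta-update: adapt on a high-resource batch, then pull
    the initialization toward the adapted weights and toward low-resource
    generalization, without any second-order derivatives."""
    fast = copy.deepcopy(parser)                      # temporary fast weights
    inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                      # inner loop: high-resource language
        inner_opt.zero_grad()
        loss_fn(fast, hi_res_batch).backward()
        inner_opt.step()
    # first-order generalization signal: gradient of the low-resource loss
    # taken at the adapted weights, applied to the original parameters
    inner_opt.zero_grad()
    loss_fn(fast, lo_res_batch).backward()
    with torch.no_grad():
        for p, pf in zip(parser.parameters(), fast.parameters()):
            p += meta_lr * (pf - p)                   # Reptile-style interpolation
            if pf.grad is not None:
                p -= meta_lr * pf.grad                # low-resource generalization step
```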
Meta-learning with auxiliary languages has demonstrated promising improvements for cross-lingual natural language processing. However, previous studies sample the meta-training and meta-testing data from the same language, which limits the model's ability to transfer across languages. In this paper, we propose XLA-MAML, which performs direct cross-lingual adaptation in the meta-learning stage. We conduct zero-shot and few-shot experiments on natural language inference and question answering. The experimental results demonstrate the effectiveness of our method across different languages, tasks, and pretrained models. We also analyze various cross-lingual-specific settings for meta-learning, including sampling strategies and parallelism.
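A minimal first-order sketch of the cross-lingual adaptation idea, with the support batch drawn from a source language and the query batch from a different target language; `model`, `loss_fn`, and `meta_opt` are illustrative names under our own assumptions, not the XLA-MAML implementation:

```python
import copy
import torch

def xla_maml_step(model, loss_fn, support_batch_src, query_batch_tgt,
                  inner_lr=1e-3, meta_opt=None):
    """One cross-lingual MAML step (first-order): adapt on a source-language
    support batch, evaluate on a query batch from a *different* language,
    and update the shared initialization with the query-loss gradient."""
    fast = copy.deepcopy(model)
    params = list(fast.parameters())
    # inner step on the source-language support batch
    support_loss = loss_fn(fast, support_batch_src)
    grads = torch.autograd.grad(support_loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= inner_lr * g
    # outer step: the query batch comes from another language, which is what
    # distinguishes direct cross-lingual adaptation from same-language MAML
    meta_opt.zero_grad()
    loss_fn(fast, query_batch_tgt).backward()
    with torch.no_grad():                             # copy first-order grads back
        for p, pf in zip(model.parameters(), fast.parameters()):
            p.grad = pf.grad.clone() if pf.grad is not None else None
    meta_opt.step()
```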
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
We propose a two-stage training approach for developing a single NMT model that translates unseen languages both to and from English. For the first stage, we initialize an encoder-decoder model with pretrained XLM-R and RoBERTa weights, then perform multilingual fine-tuning on parallel data in 25 languages. We find that this model can generalize to zero-shot translation for unseen languages. For the second stage, we leverage this generalization ability to generate synthetic parallel data from monolingual datasets, then train with successive rounds of back-translation. The final model extends to the English-to-many direction while retaining many-to-English performance. We term our approach EcXTra (English-centric Crosslingual (X) Transfer). Our approach sequentially leverages auxiliary parallel data and monolingual data, and is conceptually simple, using only a standard cross-entropy objective in both stages. The final EcXTra model is evaluated on unsupervised NMT for 8 low-resource languages, achieving a new state of the art for English-to-Kazakh (22.3 > 10.4 BLEU) and competitive performance for the other 15 translation directions.
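The second stage is essentially iterative back-translation. A schematic sketch of one round, with hypothetical `translate` and `train_step` helpers standing in for the actual training machinery:

```python
def back_translation_round(model, mono_tgt_sents, train_step, translate):
    """One round of back-translation for a new target language: use the
    current model to translate monolingual target sentences into English,
    then train on the resulting synthetic (English -> target) pairs with a
    standard cross-entropy objective."""
    synthetic_pairs = []
    for tgt in mono_tgt_sents:
        src_en = translate(model, tgt, direction="tgt->en")  # zero-shot in round one
        synthetic_pairs.append((src_en, tgt))                # English-centric pair
    for src_en, tgt in synthetic_pairs:
        train_step(model, src=src_en, ref=tgt)               # supervised update
    return model
```

Successive rounds reuse the improved model to regenerate the synthetic pairs, which is what lets the English-to-many direction bootstrap itself from monolingual data alone.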
Supervised deep-learning-based approaches have been applied to task-oriented dialog and have proven effective for limited-domain and limited-language applications when a sufficient number of training examples is available. In practice, these approaches suffer from the drawbacks of domain-driven design and under-resourced languages. Domain and language models should grow and change as the problem space evolves. On one hand, research on transfer learning has demonstrated the cross-lingual ability of multilingual Transformer-based models to learn semantically rich representations. On the other hand, meta-learning has enabled the development of task and language learning algorithms capable of broad generalization. In this context, this article investigates the cross-lingual transferability of synergistically combining few-shot learning with prototypical neural networks and multilingual Transformer-based models. Experiments on natural language understanding tasks from the MultiATIS++ corpus show that our approach substantially improves the transfer learning performance observed between low- and high-resource languages. More generally, our approach confirms that the meaningful latent space learned in a given language can be generalized to unseen and under-resourced languages using meta-learning.
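A minimal sketch of the prototypical-network classification rule used in this line of work, assuming an `encode` function that maps utterances to vectors via a multilingual encoder (names are illustrative):

```python
import torch

def proto_classify(encode, support_texts, support_labels, query_texts):
    """Prototypical-network inference: each class prototype is the mean of
    the (multilingual) embeddings of its support examples; queries are
    assigned to the nearest prototype by Euclidean distance."""
    support = encode(support_texts)                      # [n_support, d]
    query = encode(query_texts)                          # [n_query, d]
    labels = torch.as_tensor(support_labels)
    classes = labels.unique()
    prototypes = torch.stack([support[labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(query, prototypes)               # [n_query, n_classes]
    return classes[dists.argmin(dim=1)]                  # predicted class ids
```

Because only a handful of support examples per intent are needed to form prototypes, the same rule applies unchanged when those examples come from a new, under-resourced language.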
Multilingual language models (MLLMs) such as mBERT, XLM, and XLM-R have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, a large body of work has emerged on (i) building larger MLLMs covering more languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analyzing the performance of MLLMs on monolingual, zero-shot cross-lingual, and bilingual tasks, (iv) understanding the universal language patterns (if any) learned by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering these broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Universal cross-lingual sentence embeddings map semantically similar cross-lingual sentences into a shared embedding space. Aligning cross-lingual sentence embeddings usually requires supervised cross-lingual parallel sentences. In this work, we propose mSimCSE, which extends SimCSE to multilingual settings and reveals that contrastive learning on English data can surprisingly learn high-quality universal cross-lingual sentence embeddings without any parallel data. In unsupervised and weakly supervised settings, mSimCSE significantly improves on previous sentence embedding methods for cross-lingual retrieval and multilingual STS tasks. The performance of unsupervised mSimCSE is comparable to that of fully supervised methods in retrieving low-resource languages and multilingual STS. The performance can be further enhanced when cross-lingual NLI data is available. Our code is publicly available at https://github.com/yaushian/mSimCSE.
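A minimal sketch of the unsupervised SimCSE-style contrastive objective that mSimCSE applies to English data: two dropout-noised encodings of the same sentence form a positive pair, and other in-batch sentences are negatives. The `encoder` callable is an assumption of this sketch, not the released code:

```python
import torch
import torch.nn.functional as F

def simcse_loss(encoder, batch_sentences, temperature=0.05):
    """Unsupervised SimCSE loss: encode each sentence twice so that dropout
    provides two different 'views'; the matching view is the positive and
    all other sentences in the batch are negatives (InfoNCE)."""
    z1 = encoder(batch_sentences)                  # first pass, dropout on
    z2 = encoder(batch_sentences)                  # second pass, different dropout mask
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sims = z1 @ z2.T / temperature                 # [batch, batch] cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(sims, targets)
```

The surprising finding above is that when `encoder` is a multilingual pretrained model, this English-only objective already aligns sentences across languages.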
Pre-trained multilingual language models show significant performance gains for zero-shot cross-lingual model transfer on a wide range of natural language understanding (NLU) tasks. Previously, for zero-shot cross-lingual evaluation, pre-trained models are only fine-tuned on English data and tested on a variety of target languages. In this paper, we do cross-lingual evaluation on various NLU tasks (sentence classification, sequence labeling, question answering) using prompt-tuning and compare it with fine-tuning. The results show that prompt tuning achieves much better cross-lingual transfer than fine-tuning across datasets, with only 0.1% to 0.3% tuned parameters. Additionally, we demonstrate through the analysis that prompt tuning can have better cross-lingual transferability of representations on downstream tasks with better aligned decision boundaries.
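A minimal sketch of soft prompt tuning as contrasted with fine-tuning here: the pretrained model is frozen and only a small matrix of prompt embeddings, prepended to the input, is trained. The wrapper below is a hypothetical illustration (it assumes a backbone that accepts input embeddings directly), not the paper's code:

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen pretrained encoder: only `prompt` (a few trainable
    embedding vectors prepended to every input) is updated, which is how
    prompt tuning touches only ~0.1-0.3% of the parameters."""
    def __init__(self, backbone, embed_layer, prompt_len=20, hidden=768):
        super().__init__()
        self.backbone = backbone
        self.embed = embed_layer
        for p in self.backbone.parameters():       # freeze the pretrained model
            p.requires_grad = False
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids):
        tok = self.embed(input_ids)                               # [B, T, H]
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.backbone(torch.cat([prompt, tok], dim=1))     # [B, P+T, H]
```

An optimizer constructed over `[model.prompt]` alone then implements the tiny tuned-parameter budget the results above refer to.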
Reliable evaluation benchmarks, designed for replicability and comprehensiveness, have driven progress in machine learning. Due to the lack of a multilingual benchmark, however, vision-and-language research has mostly focused on English-language tasks. To fill this gap, we introduce the Image-Grounded Language Understanding Evaluation (IGLUE) benchmark. By both aggregating pre-existing datasets and creating new ones, IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting but also in newly defined few-shot learning setups. Based on an evaluation of the available state-of-the-art models, we find that translate-test transfer outperforms zero-shot transfer and that few-shot learning is hard to exploit for many tasks. Moreover, downstream performance is partially explained by the amount of unlabelled textual data available for pretraining, and only weakly by the typological distance between target and source languages. We hope to encourage future research efforts in this area by releasing the benchmark to the community.
Previous work has mainly focused on improving cross-lingual transfer for NLU tasks with a multilingual pretrained encoder (MPE), or on improving performance in supervised machine translation with BERT. However, whether an MPE can help facilitate the cross-lingual transferability of an NMT model remains under-explored. In this paper, we focus on a zero-shot cross-lingual transfer task in NMT: the NMT model is trained with a parallel dataset for only one language pair and an off-the-shelf MPE, then directly tested on zero-shot language pairs. We propose SixT, a simple yet effective model for this task. SixT leverages the MPE with a two-stage training schedule and gains further improvement from a position-disentangled encoder and a capacity-enhanced decoder. With this method, SixT significantly outperforms mBART, a pretrained multilingual encoder-decoder model explicitly designed for NMT, with an average improvement of 7.1 BLEU on zero-shot any-to-English test sets across 14 source languages. Furthermore, with much less training computation cost and training data, our model achieves better performance on 15 any-to-English test sets than CRISS and m2m-100, two strong multilingual NMT baselines.
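A schematic sketch of a two-stage schedule of the kind described above: keep the pretrained multilingual encoder frozen while the decoder learns the task, then unfreeze and train end to end. Module and helper names are illustrative assumptions, and SixT's published schedule may differ in detail:

```python
def two_stage_training(nmt_model, parallel_data, train, stage1_steps, stage2_steps):
    """Stage 1: train only the decoder on the single language pair while the
    multilingual pretrained encoder (MPE) stays frozen, preserving its
    cross-lingual representations. Stage 2: unfreeze the encoder and
    continue training the full model on the same data."""
    for p in nmt_model.encoder.parameters():
        p.requires_grad = False                    # stage 1: MPE frozen
    train(nmt_model, parallel_data, steps=stage1_steps)
    for p in nmt_model.encoder.parameters():
        p.requires_grad = True                     # stage 2: full model
    train(nmt_model, parallel_data, steps=stage2_steps)
    return nmt_model
```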
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 42 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
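A minimal sketch of the mark-then-translate idea: wrap each labeled span in markers, machine-translate the marked sentence, and read the projected spans back out of the translation, pairing them positionally with the original labels. The square-bracket marker format and the `mt` call are illustrative assumptions; the paper studies which marker choices translation systems preserve best:

```python
import re

def mark(sentence, spans):
    """Insert bracket markers around each labeled span.
    `spans` is a list of (start, end, label) character offsets."""
    out, prev = [], 0
    ordered = sorted(spans)
    for start, end, label in ordered:
        out.append(sentence[prev:start])
        out.append(f"[{sentence[start:end]}]")     # marker assumed to survive MT
        prev = end
    out.append(sentence[prev:])
    return "".join(out), [label for _, _, label in ordered]

def project(translated, labels):
    """Recover projected spans: text between markers in the translation,
    paired positionally with the original labels."""
    return list(zip(labels, re.findall(r"\[(.*?)\]", translated)))

marked, labels = mark("Barack Obama visited Paris.",
                      [(0, 12, "PER"), (21, 26, "LOC")])
# marked == "[Barack Obama] visited [Paris]."
# translated = mt(marked, tgt_lang="de")          # hypothetical MT call
# project(translated, labels) -> [("PER", ...), ("LOC", ...)]
```

The appeal, per the results above, is that the markers directly preserve span boundaries through translation, avoiding a separate word-alignment step.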
An effective method for cross-lingual transfer is to fine-tune a bilingual or multilingual model on a supervised dataset in one language and evaluate it on another language in a zero-shot manner. Translating examples at training time or at inference time are also viable alternatives. However, there are costs associated with these methods that are rarely addressed in the literature. In this work, we analyze cross-lingual methods in terms of their effectiveness (e.g., accuracy), development and deployment costs, and latency at inference time. Our experiments on three tasks indicate that the best cross-lingual method is highly task-dependent. Finally, by combining zero-shot and translation methods, we achieve state-of-the-art results on three of the datasets used in this work. Based on these results, we question the need for manually labeled training data in a target language. Code and translated datasets are available at https://github.com/unicamp-dl/cross-lingsual-analysis
We consider zero-shot cross-lingual transfer for legal topic classification using the recent MultiEURLEX dataset. Since the original dataset contains parallel documents, which is unrealistic for zero-shot cross-lingual transfer, we develop a new version of the dataset without parallel documents. We use it to show that translation-based methods vastly outperform multilingually pretrained models, the best previous zero-shot transfer method for MultiEURLEX. We also develop a bilingual teacher-student zero-shot transfer approach, which exploits additional unlabeled documents in the target language and performs better than a model fine-tuned directly on labeled target-language documents.
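A schematic sketch of a bilingual teacher-student transfer of the kind described above: a teacher trained on labeled source-language documents pseudo-labels the unlabeled target-language documents, and a student is trained on those pseudo-labels. The helper names are illustrative, and the paper's exact pipeline (e.g., how translation enters the teacher) may differ:

```python
def teacher_student_transfer(teacher, student, labeled_src, unlabeled_tgt,
                             fine_tune, predict):
    """Bilingual teacher-student zero-shot transfer: no labeled target-language
    documents are used, only the teacher's predictions on unlabeled
    target-language documents."""
    fine_tune(teacher, labeled_src)                        # labeled source-language docs
    pseudo_labeled = [(doc, predict(teacher, doc)) for doc in unlabeled_tgt]
    fine_tune(student, pseudo_labeled)                     # student sees target language only
    return student
```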
Cross-lingual speech adaptation aims to leverage multiple rich-resource languages to build models for a low-resource target language. Since a low-resource language has limited training data, speech recognition models can easily overfit. In this paper, we investigate the performance of multiple adapters for parameter-efficient cross-lingual speech adaptation. Building on our previous MetaAdapter, which leverages adapters implicitly, we propose a novel algorithm called SimAdapter for explicitly learning knowledge from adapters. Our algorithms use adapters that can be easily integrated into the Transformer structure. MetaAdapter uses meta-learning to transfer general knowledge from the training data to the test language. SimAdapter aims to learn the similarities between the source and target languages during adapter fine-tuning. We conduct extensive experiments on five low-resource languages from the Common Voice dataset. The results demonstrate that, compared with a strong full-model fine-tuning baseline, our MetaAdapter and SimAdapter methods can reduce WER by 2.98% and 2.55% with only 2.5% and 15.5% of trainable parameters, respectively. Moreover, we show that these two novel algorithms can be integrated for even better performance, with up to 3.55% relative WER reduction.
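A minimal sketch of the bottleneck adapter module this line of work inserts into each Transformer layer (down-projection, nonlinearity, up-projection, residual connection); the dimensions are illustrative:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: during cross-lingual adaptation only these small
    projections are trained, which is how the methods above get away with
    updating just 2.5-15.5% of the parameters."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))   # residual keeps the backbone intact
```

SimAdapter, as described above, goes further by learning to combine the outputs of several source-language adapters for a target language; the fusion mechanism itself is beyond this sketch.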
Modern virtual assistants use internal semantic parsing engines to convert user utterances to actionable commands. However, prior work has demonstrated that semantic parsing is a difficult multilingual transfer task with low transfer efficiency compared to other tasks. In global markets such as India and Latin America, this is a critical issue as switching between languages is prevalent for bilingual users. In this work we dramatically improve the zero-shot performance of a multilingual and codeswitched semantic parsing system using two stages of multilingual alignment. First, we show that contrastive alignment pretraining improves both English performance and transfer efficiency. We then introduce a constrained optimization approach for hyperparameter-free adversarial alignment during finetuning. Our Doubly Aligned Multilingual Parser (DAMP) improves mBERT transfer performance by 3x, 6x, and 81x on the Spanglish, Hinglish and Multilingual Task Oriented Parsing benchmarks respectively and outperforms XLM-R and mT5-Large using 3.2x fewer parameters.
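A minimal sketch of contrastive alignment pretraining of the kind described in the first stage: embeddings of translation pairs are pulled together with an InfoNCE objective over in-batch negatives. This is an illustrative assumption, not the DAMP code, and the second (adversarial) stage is not covered:

```python
import torch
import torch.nn.functional as F

def alignment_loss(encoder, src_sents, tgt_sents, temperature=0.07):
    """Contrastive alignment: the i-th source sentence and its translation
    form a positive pair; all other in-batch sentences are negatives,
    pushing the encoder toward a language-neutral space."""
    zs = F.normalize(encoder(src_sents), dim=-1)
    zt = F.normalize(encoder(tgt_sents), dim=-1)
    logits = zs @ zt.T / temperature
    targets = torch.arange(zs.size(0), device=zs.device)
    # symmetric InfoNCE over both retrieval directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```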
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
Task-oriented dialogue (TOD) systems have been applied in a range of domains to support human users to achieve specific goals. Systems are typically constructed for a single domain or language and do not generalise well beyond this. Their extension to other languages in particular is restricted by the lack of available training data for many of the world's languages. To support work on Natural Language Understanding (NLU) in TOD across multiple languages and domains simultaneously, we constructed MULTI3NLU++, a multilingual, multi-intent, multi-domain dataset. MULTI3NLU++ extends the English-only NLU++ dataset to include manual translations into a range of high, medium and low resource languages (Spanish, Marathi, Turkish and Amharic), in two domains (banking and hotels). MULTI3NLU++ inherits the multi-intent property of NLU++, where an utterance may be labelled with multiple intents, providing a more realistic representation of a user's goals and aligning with the more complex tasks that commercial systems aim to model. We use MULTI3NLU++ to benchmark state-of-the-art multilingual language models as well as Machine Translation and Question Answering systems for the NLU task of intent detection for TOD systems in the multilingual setting. The results demonstrate the challenging nature of the dataset, particularly in the low-resource language setting.
This paper presents a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes, and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice, and VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help improve speech processing tasks for many more languages of the world.
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities on a wide range of tasks. Our largest model, with 7.5 billion parameters, sets a new state of the art in few-shot learning on more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in the 0-shot setting and +9.4% in the 4-shot setting) and natural language inference (+5.4% in each of the 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement in surface-form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models on hate speech detection in five languages and find they have limitations similar to comparably sized GPT-3 models.
While pretrained language models (PLMs) primarily serve as general-purpose text encoders that can be fine-tuned for a wide variety of downstream tasks, recent work has shown that they can also be rewired to produce high-quality word representations (i.e., static word embeddings) and yield good performance in type-level lexical tasks. While existing work has mainly focused on the lexical specialization of PLMs in monolingual and bilingual settings, in this work we expose massively multilingual transformers (MMTs, e.g., mBERT or XLM-R) to multilingual lexical knowledge at scale, leveraging BabelNet as a readily available, rich source of multilingual and cross-lingual type-level lexical knowledge. Concretely, we use BabelNet's multilingual synsets to create synonym pairs across 50 languages and then subject the MMTs (mBERT and XLM-R) to a lexical specialization procedure guided by a contrastive objective. We show that such massively multilingual lexical specialization brings substantial gains in two standard cross-lingual lexical tasks, bilingual lexicon induction and cross-lingual word similarity, as well as in cross-lingual sentence retrieval. Crucially, we observe gains for languages unseen during specialization, indicating that multilingual lexical specialization enables generalization to languages with no lexical constraints. In a series of subsequent controlled experiments, we show that the amount of pretraining data the MMT has seen for the languages involved in specialization has a much greater effect on performance than the linguistic diversity of the set of constraints. Encouragingly, this suggests that lexical tasks involving low-resource languages benefit the most from the lexical knowledge of resource-rich languages.
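A minimal sketch of the pair-creation step described above: turning multilingual synsets into cross-lingual synonym pairs for the contrastive objective. The data layout (lists of (word, language) tuples per synset) is an assumption of this sketch:

```python
from itertools import combinations

def synonym_pairs(synsets):
    """Turn multilingual synsets (e.g., from BabelNet) into training pairs:
    every two lexicalizations of the same concept form a positive pair,
    within or across languages."""
    pairs = []
    for synset in synsets:                        # synset: list of (word, language)
        for (w1, l1), (w2, l2) in combinations(synset, 2):
            pairs.append(((w1, l1), (w2, l2)))
    return pairs

# toy example: one concept lexicalized in three languages
demo = [[("dog", "en"), ("Hund", "de"), ("perro", "es")]]
# synonym_pairs(demo) -> [(('dog','en'),('Hund','de')), (('dog','en'),('perro','es')),
#                         (('Hund','de'),('perro','es'))]
```

Each pair then serves as a positive in a contrastive loss over the MMT's word representations, with other in-batch words as negatives.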