Developing Named Entity Recognition (NER) systems for Indian languages has been a long-standing challenge, mainly due to the need for large amounts of clean annotated training instances. This paper presents an end-to-end framework for NER in Indian languages in low-resource settings by exploiting parallel corpora between English and Indian languages together with an English NER dataset. The proposed framework includes an annotation projection method that combines word alignment scores with NER label prediction confidence scores on the source-language (English) data to generate weakly labeled data in the target Indian language. We use a variant of the teacher-student model and jointly optimize it on the pseudo-labels of the teacher model and the predictions on the generated weakly labeled data. We also present manually annotated test sets for three Indian languages: Hindi, Bengali, and Gujarati. We evaluate the performance of the proposed framework on these test sets. Empirical results show a minimum improvement of 10% over zero-shot transfer learning models for all languages. This indicates that weakly labeled data generated in the target Indian languages with the proposed annotation projection method can complement annotated source-language data to improve performance. Our code is publicly available at https://github.com/aksh555/cl-ner
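To make the projection step concrete, the following minimal sketch combines a word-alignment score with the source-side NER confidence to produce weak target-language labels. The function name, the simple product scoring, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of annotation projection: keep a projected label only
# when alignment score x source NER confidence clears a threshold.
def project_labels(src_labels, src_confidences, alignments, threshold=0.5):
    """src_labels[i]      : NER tag predicted for source token i
       src_confidences[i] : model confidence for that tag
       alignments         : dict (src_idx, tgt_idx) -> alignment score in [0, 1]
       Returns a dict tgt_idx -> weak label for the target sentence."""
    weak_labels, best_scores = {}, {}
    for (i, j), align_score in alignments.items():
        if src_labels[i] == "O":
            continue                                   # only project entity labels
        combined = align_score * src_confidences[i]    # simple product; a weighted sum also works
        if combined >= threshold and combined > best_scores.get(j, 0.0):
            best_scores[j] = combined
            weak_labels[j] = src_labels[i]
    return weak_labels

# Toy example: an English sentence aligned to its Indian-language translation.
src_labels = ["B-PER", "O", "O", "B-LOC"]
src_conf = [0.98, 0.99, 0.99, 0.95]
alignments = {(0, 0): 0.9, (1, 2): 0.8, (2, 3): 0.7, (3, 1): 0.85}
print(project_labels(src_labels, src_conf, alignments))  # {0: 'B-PER', 1: 'B-LOC'}
```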
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. In each language, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization) for 9 out of the 11 languages. The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian language sentence. We also create manually annotated testsets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing testsets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
Zero-shot cross-lingual named entity recognition (NER) aims at transferring knowledge from annotated and rich-resource data in source languages to unlabeled and lean-resource data in target languages. Existing mainstream methods based on the teacher-student distillation framework ignore the rich and complementary information lying in the intermediate layers of pre-trained language models, and domain-invariant information is easily lost during transfer. In this study, a mixture of short-channel distillers (MSD) method is proposed to fully interact the rich hierarchical information in the teacher model and to transfer knowledge to the student model sufficiently and efficiently. Concretely, a multi-channel distillation framework is designed for sufficient information transfer by aggregating multiple distillers as a mixture. Besides, an unsupervised method adopting parallel domain adaptation is proposed to shorten the channels between the teacher and student models to preserve domain-invariant features. Experiments on four datasets across nine languages demonstrate that the proposed method achieves new state-of-the-art performance on zero-shot cross-lingual NER and shows great generalization and compatibility across languages and fields.
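As a rough illustration of distilling from a mixture of channels, the sketch below combines per-channel KL losses between the student's predictions and teacher predictions derived from different intermediate layers; the mixture weights and function names are assumptions for illustration, not the MSD implementation.

```python
# Simplified sketch: each channel matches the student against a teacher
# prediction from a different layer, and channel losses are mixed.
import torch
import torch.nn.functional as F

def mixture_distillation_loss(student_logits, teacher_layer_logits, weights, temperature=2.0):
    """student_logits:       (batch, seq, num_labels)
       teacher_layer_logits: list of (batch, seq, num_labels), one per channel
       weights:              list of floats summing to 1, one per channel"""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_layer_logits):
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        loss = loss + w * F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return loss

student = torch.randn(2, 5, 9)                       # e.g., 9 NER tags
teachers = [torch.randn(2, 5, 9) for _ in range(3)]  # three channels / teacher layers
print(mixture_distillation_loss(student, teachers, [0.5, 0.3, 0.2]))
```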
We present DualNER, a simple and effective framework to make full use of both annotated source language corpus and unlabeled target language text for zero-shot cross-lingual named entity recognition (NER). In particular, we combine two complementary learning paradigms of NER, i.e., sequence labeling and span prediction, into a unified multi-task framework. After obtaining a sufficient NER model trained on the source data, we further train it on the target data in a dual-teaching manner, in which the pseudo-labels for one task are constructed from the prediction of the other task. Moreover, based on the span prediction, an entity-aware regularization is proposed to enhance the intrinsic cross-lingual alignment between the same entities in different languages. Experiments and analysis demonstrate the effectiveness of our DualNER. Code is available at https://github.com/lemon0830/dualNER.
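The dual-teaching setup alternates between the two label formats. The small sketch below shows the span-to-BIO and BIO-to-span conversions that such pseudo-labeling relies on; the helper names are illustrative and not DualNER's code.

```python
# Converting span predictions into BIO tags (to supervise the sequence-labeling
# head) and BIO tags back into spans (to supervise the span-prediction head).
def spans_to_bio(num_tokens, spans):
    """spans: list of (start, end, label), end exclusive."""
    tags = ["O"] * num_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

def bio_to_spans(tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):           # sentinel "O" closes a trailing entity
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((start, i, label))
                start, label = None, None
            if tag.startswith("B-"):
                start, label = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, label = i, tag[2:]                # tolerate an I- tag without a preceding B-
    return spans

tags = spans_to_bio(6, [(0, 2, "PER"), (4, 5, "LOC")])
print(tags)                # ['B-PER', 'I-PER', 'O', 'O', 'B-LOC', 'O']
print(bio_to_spans(tags))  # [(0, 2, 'PER'), (4, 5, 'LOC')]
```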
As an extensively studied task in natural language processing (NLP), aspect-based sentiment analysis (ABSA) predicts the sentiment expressed in a text with respect to a corresponding aspect. Unfortunately, most languages lack sufficient annotated resources, so a growing number of researchers have focused on cross-lingual aspect-based sentiment analysis (XABSA). However, recent work has concentrated only on cross-lingual data alignment rather than model alignment. To this end, we propose a novel framework, CL-XABSA: Contrastive Learning for Cross-lingual Aspect-Based Sentiment Analysis. Based on contrastive learning, we close the distance between samples with the same label in different semantic spaces, thereby achieving convergence of the semantic spaces of different languages. Specifically, we design two contrastive strategies, token-level contrastive learning of token embeddings (TL-CTE) and sentiment-level contrastive learning of token embeddings (SL-CTE), to regularize the semantic spaces of the source and target languages so that they are more uniform. Since our framework can receive datasets in multiple languages during training, it can be adapted not only to the XABSA task but also to multilingual aspect-based sentiment analysis (MABSA). To further improve model performance, we perform knowledge distillation using unlabeled data in the target language. In the distilled XABSA task, we further explore the comparative effectiveness of different data (source dataset, translated dataset, and code-switched dataset). The results show that the proposed method achieves improvements on all three tasks: XABSA, distilled XABSA, and MABSA. For reproducibility, our code is available at https://github.com/gklmip/cl-xabsa.
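A minimal sketch of the token-level contrastive idea (pulling together tokens that share a label, regardless of language) is given below. It is a generic supervised contrastive loss, assumed for illustration; it is not the authors' TL-CTE or SL-CTE code.

```python
# Supervised contrastive (InfoNCE-style) loss over token embeddings from
# source- and target-language batches; same-label tokens act as positives.
import torch
import torch.nn.functional as F

def token_level_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, d) token representations, labels: (N,) label ids."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                        # (N, N) scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))      # never contrast a token with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)  # keep only positive-pair terms
    loss_per_token = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss_per_token[pos_mask.any(dim=1)].mean()    # average over tokens with >=1 positive

# Example with random features for 6 tokens (3 source-language, 3 target-language).
emb = torch.randn(6, 16)
lab = torch.tensor([0, 1, 2, 0, 1, 2])
print(token_level_contrastive_loss(emb, lab))
```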
Universal cross-lingual sentence embeddings map semantically similar cross-lingual sentences into a shared embedding space. Aligning cross-lingual sentence embeddings usually requires supervised cross-lingual parallel sentences. In this work, we propose mSimCSE, which extends SimCSE to multilingual settings and reveal that contrastive learning on English data can surprisingly learn high-quality universal cross-lingual sentence embeddings without any parallel data. In unsupervised and weakly supervised settings, mSimCSE significantly improves previous sentence embedding methods on cross-lingual retrieval and multilingual STS tasks. The performance of unsupervised mSimCSE is comparable to fully supervised methods in retrieving low-resource languages and multilingual STS. The performance can be further enhanced when cross-lingual NLI data is available. Our code is publicly available at https://github.com/yaushian/mSimCSE.
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
Translation quality estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. As an important component of practical MT applications, this task has received increasing attention. In this paper, we first propose XLMRScore, a simple unsupervised QE method based on BERTScore computed with the XLM-RoBERTa (XLMR) model, and discuss the issues that arise when using this method. We then propose two approaches to mitigate these issues: replacing untranslated words with the unknown token, and cross-lingual alignment of the pre-trained model to represent aligned words closer to each other. We evaluate the proposed method on four low-resource language pairs from the WMT21 QE shared task, as well as a new English-Farsi test dataset introduced in this paper. Experiments show that our method achieves results comparable to the supervised baseline in two zero-shot scenarios, with a difference in Pearson correlation of less than 0.01, while outperforming the unsupervised competitors by an average of more than 8% across all low-resource language pairs.
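To illustrate the BERTScore-style computation underlying XLMRScore, here is a small sketch of greedy cosine matching between contextual token embeddings, with random vectors standing in for XLM-R embeddings; it is not the authors' implementation.

```python
# BERTScore-style F1: each hypothesis token greedily matched to its most
# similar reference token (precision), and vice versa (recall).
import numpy as np

def bertscore_f1(hyp_emb, ref_emb):
    """hyp_emb: (m, d), ref_emb: (n, d) token embeddings."""
    hyp = hyp_emb / np.linalg.norm(hyp_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = hyp @ ref.T                        # (m, n) cosine similarities
    precision = sim.max(axis=1).mean()       # best reference match per hypothesis token
    recall = sim.max(axis=0).mean()          # best hypothesis match per reference token
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(0)
print(bertscore_f1(rng.normal(size=(7, 32)), rng.normal(size=(9, 32))))
```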
Most translation tasks between languages are zero-resource translation problems, where parallel corpora are unavailable. Compared with two-pass pivot translation, multilingual neural machine translation (MNMT) can perform one-pass translation using a semantic space shared across all languages, but it usually underperforms pivot-based methods. In this paper, we propose a novel method called the Unified Multilingual Multiple-teacher Model for NMT (UM4). Our method unifies source-teacher, target-teacher, and pivot-teacher models to guide the student model for zero-resource translation. The source teacher and target teacher force the student to learn direct source-to-target translation through knowledge distilled on both the source and target sides. The pivot-teacher model further leverages monolingual corpora to enhance the student model. Experimental results show that our model over 72 translation directions significantly outperforms previous methods on the WMT benchmark.
Previous studies have demonstrated that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models on cross-lingual similarity matching tasks. However, the student model must be large for this to work; otherwise, its performance drops sharply, making deployment on memory-constrained devices impractical. To address this problem, we delve into cross-lingual knowledge distillation and propose a multi-stage distillation framework for constructing a small but high-performance cross-lingual model. In our framework, contrastive learning, bottleneck, and parameter recurrence strategies are combined to prevent the performance from being compromised during compression. Experimental results show that our method can compress XLM-R and MiniLM by more than 50% in size, while performance drops by only about 1%.
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 42 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
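A toy sketch of the mark-then-translate idea follows: wrap labeled spans in markers before translation and recover the projected spans from the translated output. The marker format, helper names, and the faked translation are illustrative assumptions, not EasyProject's exact scheme.

```python
# Insert markers around labeled spans, send the marked sentence through an MT
# system, then read the projected spans back out of the translation.
import re

def insert_markers(tokens, spans):
    """spans: list of (start, end, label) over `tokens` (end exclusive)."""
    out = list(tokens)
    for start, end, label in sorted(spans, key=lambda s: s[0], reverse=True):
        out[start:end] = [f"[{label}]"] + out[start:end] + [f"[/{label}]"]
    return " ".join(out)

def extract_spans(translated):
    return [(m.group(2).strip(), m.group(1))
            for m in re.finditer(r"\[(\w+)\](.*?)\[/\1\]", translated)]

marked = insert_markers(["Barack", "Obama", "visited", "Chennai"],
                        [(0, 2, "PER"), (3, 4, "LOC")])
# `marked` would be sent through an MT system; here we fake the translated output.
fake_translation = "[PER] बराक ओबामा [/PER] [LOC] चेन्नई [/LOC] गए"
print(extract_spans(fake_translation))   # [('बराक ओबामा', 'PER'), ('चेन्नई', 'LOC')]
```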
In the absence of readily available labeled data for a given task and language, annotation projection has been proposed as one of the possible strategies to automatically generate annotated data which may then be used to train supervised systems. Annotation projection has often been formulated as the task of projecting, on parallel corpora, some labels from a source into a target language. In this paper we present T-Projection, a new approach for annotation projection that leverages large pretrained text2text language models and state-of-the-art machine translation technology. T-Projection decomposes the label projection task into two subtasks: (i) The candidate generation step, in which a set of projection candidates using a multilingual T5 model is generated and, (ii) the candidate selection step, in which the candidates are ranked based on translation probabilities. We evaluate our method in three downstream tasks and five different languages. Our results show that T-projection improves the average F1 score of previous methods by more than 8 points.
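The two-step decomposition can be sketched as below; the brute-force candidate generator and the toy overlap-based scorer are placeholders for the mT5-based candidate generation and NMT translation-probability ranking described in the abstract.

```python
# Step 1: enumerate candidate target-language spans for a source entity.
# Step 2: rank candidates with a translation-probability scorer and keep the best.
def generate_candidates(target_tokens, max_len=4):
    """Enumerate all target sub-spans up to max_len tokens as projection candidates."""
    for start in range(len(target_tokens)):
        for end in range(start + 1, min(start + max_len, len(target_tokens)) + 1):
            yield " ".join(target_tokens[start:end])

def select_candidate(source_entity, target_tokens, translation_prob):
    """Pick the candidate span with the highest translation probability."""
    return max(generate_candidates(target_tokens),
               key=lambda cand: translation_prob(source_entity, cand))

# Toy scorer: lexical-overlap stand-in for an NMT model's translation probability.
def toy_prob(src, cand):
    return len(set(src.lower().split()) & set(cand.lower().split())) / (1 + len(cand.split()))

print(select_candidate("New Delhi", "he lives in New Delhi today".split(), toy_prob))
```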
The grammatical analysis of text in any human language typically involves a number of basic processing tasks, such as tokenization, morphological tagging, and dependency parsing. State-of-the-art systems can achieve high accuracy on these tasks for languages with large datasets, but results are poor for languages such as Tagalog that have little or no annotated data. To address this problem for Tagalog, we investigate the use of auxiliary data sources to create task-specific models in the absence of annotated Tagalog data. We also explore the use of word embeddings and data augmentation to improve performance when only a small amount of annotated Tagalog data is available. We show that these zero-shot and few-shot approaches yield substantial improvements over state-of-the-art supervised baselines in the grammatical analysis of both in-domain and out-of-domain Tagalog text.
We consider zero-shot cross-lingual transfer for legal topic classification using the recent MultiEURLEX dataset. Since the original dataset contains parallel documents, which is unrealistic for zero-shot transfer, we develop a new version of the dataset without parallel documents. We use it to show that translation-based methods vastly outperform cross-lingual fine-tuning of multilingual pre-trained models, the best previous zero-shot transfer method for MultiEURLEX. We also develop a bilingual teacher-student zero-shot transfer approach, which exploits additional unlabeled documents in the target language and performs better than a model fine-tuned directly on labeled target-language documents.
African languages still lag behind in advances in natural language processing technology, one reason being the lack of representative data, and techniques that can transfer information between languages can help reduce this data shortage. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create Setswana-Sepedi cross-lingual word embeddings. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in an n-dimensional space. The idea of word embeddings is based on the distributional hypothesis that semantically similar words occur in similar contexts (Harris, 1954). Cross-lingual embeddings leverage monolingual embeddings by learning a shared vector space for two separately trained monolingual vector spaces, so that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual embedding method in VecMap to train the Setswana-Sepedi cross-lingual embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representations using a semantic evaluation task. For the semantic similarity task, we translate the WordSim and SimLex datasets into Setswana and Sepedi, and we release this dataset as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is an improvement in the semantic representation of the word embeddings.
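The full unsupervised VecMap pipeline involves normalization, initialization, and self-learning steps; the sketch below shows only its core mapping idea, an orthogonal (Procrustes) transform between two embedding spaces, assuming a small seed dictionary and synthetic stand-in vectors.

```python
# Learn an orthogonal map W so that src @ W lines up with tgt, given seed pairs.
import numpy as np

def learn_orthogonal_map(src_vectors, tgt_vectors):
    """src_vectors, tgt_vectors: (k, d) embeddings of k seed dictionary pairs."""
    u, _, vt = np.linalg.svd(src_vectors.T @ tgt_vectors)
    return u @ vt                  # orthogonal W minimising ||src @ W - tgt||_F

rng = np.random.default_rng(0)
dim = 10
setswana = rng.normal(size=(100, dim))                 # stand-in Setswana vectors
true_rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
sepedi = setswana @ true_rotation                      # stand-in Sepedi vectors
w = learn_orthogonal_map(setswana[:20], sepedi[:20])   # learn the map from 20 seed pairs
print(np.allclose(setswana @ w, sepedi))               # True: the two spaces are aligned
```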
We introduce a new cross-lingual information retrieval (CLIR) model trained with multi-stage knowledge distillation (KD). The teacher and the student are heterogeneous systems: the former is a pipeline that relies on machine translation and monolingual IR, while the latter performs a single CLIR operation. We show that the student can learn multilingual representations and CLIR by optimizing two corresponding KD objectives. Learning multilingual representations from an English-only retriever is achieved with a novel cross-lingual alignment algorithm that greedily re-positions the teacher tokens for alignment. Evaluation on the XOR-TyDi benchmark shows that the proposed model is more effective than fine-tuning existing approaches with cross-lingual labeled IR data, with a gain of 25.4 Recall@5kt accuracy.
Real-world tasks are largely composed of multiple models, each performing a sub-task in a larger chain of tasks, i.e., using the output from a model as input for another model in a multi-model pipeline. A model like MATRa performs the task of Crosslingual Transliteration in two stages, using English as an intermediate transliteration target when transliterating between two indic languages. We propose a novel distillation technique, EPIK, that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. This method can create end-to-end models for tasks without needing a dedicated end-to-end dataset, solving the data scarcity problem. The EPIK model has been distilled from the MATra model using this technique of knowledge distillation. The MATra model can perform crosslingual transliteration between 5 languages - English, Hindi, Tamil, Kannada and Bengali. The EPIK model executes the task of transliteration without any intermediate English output while retaining the performance and accuracy of the MATra model. The EPIK model can perform transliteration with an average CER score of 0.015 and average phonetic accuracy of 92.1%. In addition, the average time for execution has reduced by 54.3% as compared to the teacher model and has a similarity score of 97.5% with the teacher encoder. In a few cases, the EPIK model (student model) can outperform the MATra model (teacher model) even though it has been distilled from the MATra model.
Vision-and-language tasks are increasingly popular in the research community, but the focus remains mainly on English. We propose a pipeline that utilizes English-only vision-language models to train monolingual models in target languages. We propose extending OSCAR+, a model that uses object tags as anchor points for learning image-text alignment, to train on visual question answering datasets in different languages. We propose a novel knowledge distillation method that uses parallel sentences to train the model in other languages. Compared with models that use the target language in the pre-training corpus, we can leverage an existing English model to transfer knowledge to the target language with significantly fewer resources. We also release a large-scale visual question answering dataset in Japanese and Hindi. Although we restrict our work to visual question answering, our model can be extended to any sequence-level classification task and to other languages as well. This paper focuses on two languages for the visual question answering task: Japanese and Hindi. Our pipeline outperforms the current state-of-the-art models with relative increases of 4.4% and 13.4% in accuracy, respectively.
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pre-training to a large number of languages. Given their success in zero-shot transfer learning, there has been extensive work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions of future research.
Knowledge bases such as Wikidata amass vast amounts of named entity information, such as multilingual labels, which can be extremely useful for various multilingual and cross-lingual applications. However, from the perspective of information consistency, there is no guarantee that such labels match across languages, which greatly compromises their usefulness for fields such as machine translation. In this work, we investigate the application of word and sentence alignment techniques, coupled with a matching algorithm, to align cross-lingual entity labels extracted from Wikidata in 10 languages. Our results indicate that the mapping between Wikidata's main labels is greatly improved by any of the methods used (up to 20 points in F1 score). We show how methods relying on sentence embeddings outperform all the others, even across different scripts. We believe that the application of such techniques to measure the similarity of label pairs, coupled with a knowledge base rich in high-quality entity labels, is an excellent asset for machine translation.