Code-mixing is the phenomenon of mixing two or more languages within a single speech event, and it is prevalent in multilingual societies. Given the low-resource nature of code-mixing, machine generation of code-mixed text is a common approach to data augmentation. However, evaluating the quality of such machine-generated code-mixed text is an open problem. In our submission to HinglishEval, a shared task collocated with INLG 2022, we attempt to model the factors influencing the quality of synthetically generated code-mixed text by predicting ratings of code-mix quality.
Code-mixed text data consists of sentences containing words or phrases from multiple languages. Most multilingual communities around the world communicate in several languages, with English often among them. Hinglish is code-mixed text composed of Hindi and English but written in Roman script. This paper aims to determine the factors that influence the quality of system-generated code-mixed text data. For the HinglishEval task, the proposed model uses multilingual BERT to measure the similarity between synthetically generated and human-generated sentences, and uses that similarity to predict the quality of synthetically generated Hinglish sentences.
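The similarity signal described above can be sketched as a cosine similarity between sentence vectors. A minimal illustration in plain Python; the vectors below are invented toy numbers standing in for real mBERT sentence embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical sentence embeddings (in practice, pooled mBERT outputs
# for a synthetic Hinglish sentence and a human-written counterpart).
synthetic_vec = [0.2, 0.7, 0.1]
human_vec = [0.25, 0.65, 0.05]

# A high similarity would serve as a proxy for generation quality.
quality_proxy = cosine_similarity(synthetic_vec, human_vec)
```

In the paper's setting, this score (or features derived from it) would feed a regressor that predicts the human quality rating.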
Identifying named entities is, in general, a practical and challenging task in the field of natural language processing. Named entity recognition on code-mixed text is a further challenge, owing to the linguistic complexity that mixing introduces. This paper describes the submission of team CMNERone to SemEval 2022 shared task 11, MultiCoNER. The code-mixed NER task aimed to identify named entities in code-mixed datasets. Our work performs named entity recognition (NER) on code-mixed datasets by leveraging multilingual data. We achieved a weighted average F1 score of 0.7044, i.e., 6% greater than the baseline.
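The weighted-average F1 reported above is the per-class F1 averaged with each class weighted by its support in the gold labels. A minimal sketch in plain Python (the toy NER tags are hypothetical):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-class F1 weighted by true-class support."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = 0.0
    for label in labels:
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        pred_n = sum(p == label for p in y_pred)
        true_n = support[label]
        precision = tp / pred_n if pred_n else 0.0
        recall = tp / true_n if true_n else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * true_n  # weight each class F1 by its support
    return total / len(y_true)

# Toy token-level tags for illustration only.
gold = ["PER", "LOC", "O", "O", "ORG"]
pred = ["PER", "O", "O", "O", "ORG"]
```

This is the same quantity `sklearn.metrics.f1_score(..., average="weighted")` computes.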
In this paper, we describe a system submitted to the INLG 2022 Generation Challenge (GenChal), which addresses the quality evaluation of synthetically generated code-mixed Hinglish text. We implement a BiLSTM-based neural network model to predict the average rating score and the disagreement score for a synthetic Hinglish dataset. In our model, we use word embeddings for the English and Hindi data, and one-hot encodings for the Hinglish data. We achieve an F1 score of 0.11 with a mean squared error of 6.0 on the average rating score prediction task. On the disagreement score prediction task, we achieve an F1 score of 0.18 with a mean squared error of 5.0.
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.
Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits and in some cases produce models that outperform multilingual approaches.
Code-switching (CS), a ubiquitous phenomenon due to the ease of communication it offers in multilingual communities, remains an understudied problem in language processing. The main reasons behind this are: (1) minimal efforts to leverage large pretrained multilingual models, and (2) the lack of annotated data. A distinguishing case of low performance of multilingual models on CS is intra-sentential mixing, which leads to switch points within sentences. We first benchmark two sequence-labeling tasks on 4 different language pairs with a suite of pretrained models to identify the problem, and then select the best-performing model among them, char-BERT (addressing (1)). We then propose a self-training method that repurposes the pretrained model by exploiting a switch-point bias on unannotated data (addressing (2)). We finally demonstrate that our approach performs well on both tasks, reducing the gap in switch-point performance while retaining overall performance, on two distinct language pairs. Our code is available here: https://github.com/pc09/emnlp2021-switch-point-biased.caString
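The switch points central to this abstract can be located directly from token-level language tags. A minimal sketch, assuming (hypothetically) that tokens carry `hi`/`en` tags with `other` for language-neutral tokens such as punctuation:

```python
def switch_points(lang_tags):
    """Return indices where the token language changes relative to the
    previous language-bearing token, skipping language-neutral tokens."""
    points, prev = [], None
    for i, tag in enumerate(lang_tags):
        if tag == "other":
            continue  # punctuation, emoji, etc. do not trigger a switch
        if prev is not None and tag != prev:
            points.append(i)
        prev = tag
    return points

# Hypothetical tag sequence for a Hinglish sentence.
tags = ["hi", "hi", "en", "en", "other", "hi"]
```

A switch-point-biased self-training scheme would up-weight (or select pseudo-labels around) exactly the positions this function returns.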
The pervasiveness of intra-sentential code-switching (CS) in spoken content has forced ASR systems to handle mixed input. Yet designing a CS-ASR system has many challenges, mainly due to data scarcity, grammatical structure complexity and mismatch, and unbalanced language usage distribution. Recent ASR research has shown that E2E-ASR can use multilingual data to handle CS phenomena with only a small amount of CS data. However, a dependency on CS data still remains. In this work, we propose a methodology to augment monolingual data with artificially generated CS text to improve different speech modules. We base our approach on equivalence constraint theory, while exploiting aligned translation pairs, to generate grammatically valid CS content. Our empirical results show a relative gain of 29-34% in perplexity and of about 2% in WER on two ecological and noisy CS test sets. Finally, human evaluation suggests that 83.8% of the generated data is acceptable to humans.
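The WER figure cited above is the word-level edit distance between hypothesis and reference, normalized by reference length. A minimal dynamic-programming sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

A "relative gain of about 2% in WER" then means `(wer_baseline - wer_new) / wer_baseline ≈ 0.02`.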
Numerous methods have been developed in recent years to monitor the spread of negativity by removing vulgar, offensive, and abusive comments from social media platforms. However, relatively few studies focus on embracing positivity and reinforcing supportive and reassuring content in online forums. Consequently, we propose creating KanHope, an English-Kannada hope speech dataset, and compare several experiments to benchmark it. The dataset consists of 6,176 user-generated comments in code-mixed Kannada, scraped from YouTube and manually annotated as bearing hope speech or not. In addition, we introduce DC-BERT4HOPE, a dual-channel model that uses the English translation of KanHope for additional training to promote hope speech detection. The approach achieves a weighted F1 score of 0.756, bettering other models. Altogether, KanHope aims to instigate research in Kannada while encouraging researchers to take a pragmatic approach toward online content that is encouraging, positive, and supportive.
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 out of the 11 languages, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian-language sentence. We also create manually annotated test sets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing test sets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
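The annotation-projection step described above can be sketched as copying entity tags across a word alignment. A simplified illustration in plain Python; the tags and the alignment below are invented for the example, whereas the actual pipeline uses automatically tagged English sentences from Samanantar and automatic alignments:

```python
def project_entities(src_tags, alignment, tgt_len):
    """Project token-level NER tags from a source sentence onto the target
    sentence via a word alignment (list of (src_idx, tgt_idx) pairs)."""
    tgt_tags = ["O"] * tgt_len
    for src_idx, tgt_idx in alignment:
        if src_tags[src_idx] != "O":
            tgt_tags[tgt_idx] = src_tags[src_idx]
    return tgt_tags

# English source: "Gandhi was born in Porbandar"
src = ["PER", "O", "O", "O", "LOC"]
# Hypothetical alignment to a 4-token target sentence.
alignment = [(0, 0), (4, 2), (1, 3)]
```

Unaligned target tokens simply keep the `O` tag, which is one reason projected training data is noisier than manual annotation.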
We present a new corpus of Twitter data annotated for code-switching and borrowing between Spanish and English. The corpus contains 9,500 tweets with token-level annotations for code-switches, borrowings, and named entities. This corpus differs from prior code-switching corpora in that we attempt to clearly define and annotate the boundary between code-switching and borrowing, and we do not treat common "internet speak" ('lol', etc.) as code-switching when it is used in otherwise monolingual contexts. The result is a corpus that enables the study and modeling of Spanish-English borrowing and code-switching on Twitter in a single dataset. We present baseline scores for modeling the labels of this corpus using transformer-based language models. The annotations themselves are released under a CC BY 4.0 license, while the applicable text is distributed in accordance with the Twitter terms of service.
The widespread rise of online social media (OSM) consumption among large populations poses the critical problem of curbing the spread of hateful content on these platforms. With the growing usage of multiple languages, the task of detecting and characterizing hate becomes more complex. Subtle variations of code-mixed text, along with switched scripts, only add to this complexity. This paper presents the solution of team PreCog IIIT Hyderabad for the HASOC 2021 multilingual Twitter hate-speech detection challenge. We adopt a multilingual transformer-based approach and describe our architecture for all 6 subtasks of the challenge. Out of the 6 teams that participated in all the subtasks, our submission ranked 3rd overall.
Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has been an extensive body of work on (i) building larger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual, and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Code-switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching was explored intensively by leveraging linguistic theories; currently, more machine-learning-oriented approaches are used to develop models. We introduce a comprehensive systematic survey of code-switching research in natural language processing to understand the progress of the past decades and to conceptualize the challenges and tasks on the code-switching topic. Finally, we summarize the trends and findings and conclude with a discussion of future directions and open questions for further investigation.
Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well for sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for LLMs in the case of morphologically rich languages (MRLs) such as Hebrew. We demonstrate that by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, we can leverage powerful multilingual, pretrained sequence-to-sequence models as mT5, eliminating the need for a specialized, morpheme-based, separately fine-tuned decoder. Using this approach, our experiments show substantial improvements over previously published results on existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs.
Preservation of domain knowledge from source to target is crucial in any translation workflow. It is common in the translation industry to receive highly specialized projects for which there is hardly any parallel in-domain data. In such scenarios, where there is insufficient in-domain data to fine-tune machine translation (MT) models, generating translations consistent with the relevant context is challenging. In this work, we propose a novel approach to domain adaptation that leverages state-of-the-art pretrained language models (LMs) for domain-specific data augmentation for MT, simulating the domain characteristics of either (a) a small bilingual dataset, or (b) the monolingual source text to be translated. Combining this idea with back-translation, we can generate large amounts of synthetic bilingual in-domain data for both use cases. For our investigation, we use the state-of-the-art Transformer architecture. We employ mixed fine-tuning to train models that significantly improve the translation of in-domain texts. More specifically, our proposed methods achieve improvements of approximately 5-6 BLEU and 2-3 BLEU, respectively, on the Arabic-to-English and English-to-Arabic language pairs. Furthermore, the results of human evaluation corroborate the automatic evaluation results.
With the recent advance in neural machine translation demonstrating its importance, research on quality estimation (QE) has been steadily progressing. QE aims to automatically predict the quality of machine translation (MT) output without reference sentences. Despite its high utility in the real world, there remain several limitations concerning manual QE data creation: inevitably incurred non-trivial costs due to the need for translation experts, and issues with data scaling and language expansion. To tackle these limitations, we present QUAK, a Korean-English synthetic QE dataset generated in a fully automatic manner. This consists of three sub-QUAK datasets QUAK-M, QUAK-P, and QUAK-H, produced through three strategies that are relatively free from language constraints. Since each strategy requires no human effort, which facilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M for QUAK-M. As an experiment, we quantitatively analyze word-level QE results in various ways while performing statistical analysis. Moreover, we show that datasets scaled in an efficient way also contribute to performance improvements by observing meaningful performance gains in QUAK-M, P when adding data up to 1.58M.
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate\footnote{for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done.} our datasets. To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
This paper presents the system description for the HinglishEval challenge at INLG 2022. The goal of this task is to investigate the factors influencing the quality of code-mixed text generation systems. The task is divided into two subtasks: quality rating prediction and prediction of annotators' disagreement. We attempt to solve these tasks using sentence-level embeddings, which are obtained by mean-pooling the contextualized word embeddings of all input tokens in our text. We experimented with various classifiers on top of the embeddings produced for the respective tasks. Our best-performing system ranked 1st on subtask B and 3rd on subtask A.
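The mean-pooling step described above can be sketched in a few lines: average the contextual token vectors while skipping padding positions. The toy vectors below stand in for real transformer outputs:

```python
def mean_pool(token_embeddings, attention_mask):
    """Sentence embedding = average of contextual token vectors,
    counting only real (non-padding) tokens, i.e. mask == 1."""
    dim = len(token_embeddings[0])
    n = sum(attention_mask)
    pooled = [0.0] * dim
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            for d in range(dim):
                pooled[d] += vec[d]
    return [v / n for v in pooled]

# Four 3-dimensional token vectors; the last position is padding,
# so its (arbitrary) values must not affect the result.
tokens = [[1.0, 2.0, 0.0], [3.0, 0.0, 2.0], [2.0, 1.0, 1.0], [9.0, 9.0, 9.0]]
mask = [1, 1, 1, 0]
sentence_vec = mean_pool(tokens, mask)
```

The resulting fixed-size vector can then be fed to any standard classifier or regressor for the two subtasks.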
We describe the winning submission to the CRAC 2022 shared task on multilingual coreference resolution. Our system first solves mention detection and then links the retrieved spans with an antecedent-maximization approach; both tasks are fine-tuned with shared Transformer weights. We report the results of fine-tuning a wide range of pretrained models. The centerpiece of this contribution is the fine-tuned multilingual model. We found that a large multilingual model with a sufficiently large encoder increases performance across all datasets, so the benefit is not limited to under-represented languages or groups of typologically related languages. The source code is available at https://github.com/ufal/crac2022-corpipe.