Twitter contains an abundance of linguistic data from the real world. We examine Twitter user-generated content in low-resource languages such as local Indonesian languages. For NLP to work for Indonesian, it must consider local dialects, geographic context, and the regional cultural influences on Indonesian languages. This paper identifies the problems we faced in building a local Indonesian NLP dataset. In addition, we are developing a framework for creating, collecting, and classifying local Indonesian NLP datasets, with automatic annotation using Twitter's geolocation tools.
Sentiment analysis is one of the most widely studied applications in NLP, but most work focuses on languages with large amounts of data. We introduce the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria (Hausa, Igbo, Nigerian-Pidgin, and Yorùbá), consisting of around 30,000 annotated tweets per language (14,000 for Nigerian-Pidgin), including a significant fraction of code-mixed tweets. We propose text collection, filtering, processing, and labeling methods that enable us to create datasets for these low-resource languages. We evaluate pre-trained models and transfer strategies on the datasets and find that language-specific models and language-adaptive fine-tuning generally perform best. We release the datasets, trained models, sentiment lexicons, and code to incentivize research on sentiment analysis in under-represented languages.
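The language-adaptive fine-tuning the authors find effective amounts to continuing masked-language-model pretraining on unlabeled in-language text before task fine-tuning. A minimal sketch with the HuggingFace transformers library, assuming a generic base checkpoint and a hypothetical file of unlabeled tweets (neither is the paper's exact setup):

```python
# Minimal sketch of language-adaptive fine-tuning: continue masked-language-model
# pretraining on unlabeled in-language tweets before task fine-tuning.
# The checkpoint name and file path are placeholders, not the paper's exact setup.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "xlm-roberta-base"                    # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# hausa_tweets.txt is a hypothetical file with one unlabeled tweet per line.
dataset = load_dataset("text", data_files={"train": "hausa_tweets.txt"})
tokenized = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                        batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="laft-hausa", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()  # the adapted encoder is then fine-tuned on the labeled sentiment data
```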
Code-switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching was explored intensively by leveraging linguistic theories; more recently, machine-learning-oriented approaches have been used to develop models. We introduce a comprehensive systematic survey of code-switching research in natural language processing to capture the progress of the past decades and conceptualize the challenges and tasks surrounding code-switching. Finally, we summarize the trends and findings and conclude with a discussion of future directions and open questions for further investigation.
Language identification (LID) is a crucial precursor for NLP, especially for mining web data. Problematically, most of the world's 7000+ languages today are not covered by LID technologies. We address this pressing issue for Africa by introducing AfroLID, a neural LID toolkit for 517 African languages and varieties. AfroLID exploits a multi-domain web dataset manually curated from across 14 language families utilizing five orthographic systems. When evaluated on our blind test set, AfroLID achieves an F1-score of 95.89. We also compare AfroLID to five existing LID tools that each cover a small number of African languages, finding that it outperforms them on most languages. We further show the utility of AfroLID in the wild by testing it on the acutely under-served Twitter domain. Finally, we offer a number of controlled case studies and perform a linguistically-motivated error analysis that allow us to showcase both AfroLID's powerful capabilities and its limitations.
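AfroLID itself is neural, and its architecture is not reproduced here; the following toy sketch only illustrates the underlying LID task with a character n-gram baseline over a made-up three-language training set:

```python
# Toy language-identification baseline using character n-grams, illustrating the
# kind of task AfroLID solves (not AfroLID's actual neural architecture).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system needs thousands of lines per language.
texts = ["bawo ni o se wa", "sannu da zuwa", "kedu ka i mere"]
labels = ["yor", "hau", "ibo"]

lid = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),  # character 1-4 grams
    LogisticRegression(max_iter=1000),
)
lid.fit(texts, labels)
print(lid.predict(["sannu abokina"]))  # shared character n-grams point to 'hau'
```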
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a broad range of resources, with a focus on those that are publicly available. In addition to providing information on the available linguistic resources, we offer a set of recommendations and identify gaps in the data available for conducting research and building applications in Turkish linguistics and natural language processing.
Pre-training large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Although this method has proven to be effective for many domains, it might not always provide desirable benefits. In this paper, we study the effects of hateful pre-training on low-resource hate speech classification tasks. While previous studies on the English language have emphasized its importance, we aim to augment their observations with some non-obvious insights. We evaluate different variations of tweet-based BERT models pre-trained on hateful, non-hateful, and mixed subsets of a 40M tweet dataset. This evaluation is carried out for the Indian languages Hindi and Marathi. This paper provides empirical evidence that hateful pre-training is not the best pre-training option for hate speech detection; we show that pre-training on non-hateful text from the target domain provides similar or better results. Further, we introduce HindTweetBERT and MahaTweetBERT, the first publicly available BERT models pre-trained on Hindi and Marathi tweets, respectively, and show that they provide state-of-the-art performance on hate speech classification tasks. We also release the hateful BERT models for the two languages and gold hate speech evaluation benchmarks, HateEval-Hi and HateEval-Mr, each consisting of 2,000 manually labeled tweets. The models and data are available at https://github.com/l3cube-pune/MarathiNLP.
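The released checkpoints are intended to be loaded as standard BERT encoders with a freshly initialized classification head. A sketch is shown below; the model ID follows the l3cube-pune naming scheme but is an assumption and should be verified against the linked repository:

```python
# Sketch of using a released tweet-BERT checkpoint for hate speech classification.
# The model ID is an assumption based on the l3cube-pune naming scheme; verify it
# against https://github.com/l3cube-pune/MarathiNLP before use.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "l3cube-pune/marathi-tweets-bert"   # hypothetical checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=2 for the hateful / non-hateful decision; the classification head is
# freshly initialized and must be fine-tuned on labeled data such as HateEval-Mr.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("उदाहरण ट्वीट", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```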
Italy is characterized by a linguistic diversity landscape that is one of a kind in Europe, implicitly encoding local knowledge, cultural traditions, artistic expression, and the history of its speakers. However, Italy's more than 30 language varieties are at risk of disappearing within a few generations. Language technology has a major role to play in preserving endangered languages, but it currently struggles with varieties that are under-resourced, mostly lack standardized orthography, and are used primarily in spoken settings. In this paper, we introduce the linguistic context of Italy and discuss the challenges of developing NLP technology for Italy's language varieties. We provide potential directions and advocate a paradigm shift from machine-centric to speaker-centric NLP. Finally, we propose building a local community aimed at responsible, participatory development of speech and language technology for the languages and dialects of Italy.
The appearance of complex attention-based language models such as BERT, RoBERTa, or GPT-3 has made it possible to tackle highly complex tasks in many scenarios. However, when applied to specific domains, these models run into considerable difficulty. This is the case for social networks such as Twitter, an ever-changing stream of information written in informal, complex language, where each message demands careful evaluation to be understood, even by humans, given the important role of context. Tackling tasks in this domain through natural language processing involves serious challenges, and when powerful, state-of-the-art multilingual models are applied in such settings, language-specific nuances tend to get lost in translation. To face these challenges, we present \textbf{BERTuit}, the largest transformer proposed so far for Spanish, pre-trained on a massive dataset of 230M Spanish tweets using RoBERTa optimization. Our motivation is to provide a powerful resource for better understanding Spanish Twitter and for applications focused on this social network, with particular emphasis on solutions dedicated to tackling the spread of misinformation on the platform. BERTuit is evaluated on several tasks and compared against M-BERT, XLM-RoBERTa, and XLM-T, all highly competitive multilingual transformers. The utility of our approach is shown with an application in this setting: a zero-shot methodology for visualizing hoaxes and analyzing the spread of misinformation by groups of authors. Misinformation spreads wildly on such platforms in languages other than English, which means transformer performance may suffer when transferred outside English-speaking contexts.
We present a new corpus of Twitter data annotated for code-switching and borrowing between Spanish and English. The corpus contains 9,500 tweets with token-level annotations for code-switches, borrowings, and named entities. It differs from prior code-switching corpora in that we attempt to clearly define and annotate the boundary between code-switching and borrowing, and we do not treat common "Internet-speak" ('lol', etc.) as code-switching when it appears in otherwise monolingual contexts. The result is a corpus that enables the study and modeling of Spanish-English borrowing and code-switching on Twitter within a single dataset. We present baseline scores for modeling the corpus labels with transformer-based language models. The annotations themselves are released under a CC BY 4.0 license, while the text they apply to is distributed in accordance with the Twitter terms of service.
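Modeling the corpus's token-level labels is a standard sequence-labeling setup. A hedged sketch follows, using an invented tag set rather than the corpus's actual inventory:

```python
# Sketch of a token-classification baseline for token-level code-switching labels.
# The label set below is illustrative, not the corpus's exact tag inventory.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "ENG", "SPA", "BORROWING", "NAMED-ENTITY"]   # assumed tags
model_id = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=len(labels))

tweet = "fuimos al mall y después lunch con amigos"
enc = tokenizer(tweet, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1)[0]
# The head is untrained here, so predictions are random until fine-tuned on the corpus.
for tok, p in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(tok, labels[p])
```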
The spread of offensive content online, such as hate speech and cyber-bullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, research in offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive or not offensive at both the sentence level and the token level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets annotated following a semi-supervised approach.
In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream natural language processing (NLP) tasks. However, pre-training these large multilingual language models requires a lot of training data, which is not available for African languages. Active learning is a semi-supervised learning algorithm in which a model consistently and dynamically learns to identify the most beneficial samples to train itself on, in order to achieve better optimization and performance on downstream tasks. Furthermore, active learning effectively and practically addresses real-world data scarcity. Despite all its benefits, active learning, in the context of NLP and especially multilingual language model pretraining, has received little consideration. In this paper, we present AfroLM, a multilingual language model pretrained from scratch on 23 African languages (the largest effort to date) using our novel self-active learning framework. Pretrained on a dataset significantly (14x) smaller than existing baselines, AfroLM outperforms many multilingual pretrained language models (AfriBERTa, XLMR-base, mBERT) on various NLP downstream tasks (NER, text classification, and sentiment analysis). Additional out-of-domain sentiment analysis experiments show that \textbf{AfroLM} is able to generalize well across various domains. We release the source code and the datasets used in our framework at https://github.com/bonaventuredossou/MLM_AL.
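The abstract does not spell out the self-active learning framework, but the underlying active-learning loop it builds on can be sketched generically: train on the labeled pool, score the unlabeled pool by uncertainty, and move the least confident examples into the labeled set. A toy margin-sampling version with placeholder data:

```python
# Generic pool-based active learning with margin (uncertainty) sampling.
# This illustrates the idea behind self-active learning, not AfroLM's exact framework.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [f"sample text {i}" for i in range(100)]       # placeholder corpus
y_true = np.array([i % 2 for i in range(100)])         # placeholder labels

X = TfidfVectorizer().fit_transform(texts)
labeled = list(range(10))                              # small seed set
pool = [i for i in range(100) if i not in labeled]

for _ in range(5):                                     # five acquisition rounds
    clf = LogisticRegression().fit(X[labeled], y_true[labeled])
    probs = clf.predict_proba(X[pool])
    # margin between the two most probable classes; small margin = most uncertain
    margin = np.sort(probs, axis=1)[:, -1] - np.sort(probs, axis=1)[:, -2]
    picked = [pool[i] for i in np.argsort(margin)[:5]]
    labeled += picked                                  # "annotate" the picked examples
    pool = [i for i in pool if i not in picked]
```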
Pre-trained contextualized text representation models learn effective representations of natural language that machines can understand. Following the breakthrough of the attention mechanism, a new generation of pre-trained models achieving strong performance has been proposed since the introduction of the Transformer. Bidirectional Encoder Representations from Transformers (BERT) has become the state-of-the-art model for language understanding. Despite this success, most available models have been trained on Indo-European languages, and similar research for under-represented languages and dialects remains sparse. In this paper, we investigate the feasibility of training monolingual transformer-based language models for under-represented languages, with a specific focus on the Tunisian dialect. We evaluate our language models on a sentiment analysis task, a dialect identification task, and a reading comprehension question-answering task. We show that using noisy web-crawled data instead of structured data (Wikipedia, articles, etc.) is more suitable for such non-standardized languages. Moreover, the results indicate that a relatively small web-crawled dataset yields performance on par with that obtained using larger datasets. Finally, our TunBERT model matches or improves on the state of the art in all three downstream tasks. We release the TunBERT pre-trained model and the datasets used for fine-tuning.
We present NusaCrowd, a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark for Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois. Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from a major world language and a distinctive grammar reflecting the languages of the original speakers and the process of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer from large monolingual or multilingual pretrained models. While our work, along with previous work, shows that transfer from these models to low-resource languages that are unrelated to languages in their training set is not very effective, we would expect stronger results from transfer to creoles. Indeed, our experiments show considerably better results from few-shot learning of JamPatoisNLI than for such unrelated languages, and help us begin to understand how the unique relationship between creoles and their high-resource base languages affects cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring premises and expert-written hypotheses, is a step towards steering research into a traditionally underserved language and a useful benchmark for understanding cross-lingual NLP.
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate\footnote{for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done.} our datasets. To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
To address the difficult problem of detecting offensive comments/posts that are often informal, unstructured, misspelled, and code-mixed, we introduce two inventive methods in this research paper. Offensive comments/posts on social media platforms can affect individuals, groups, or minors. To classify comments in two popular Dravidian languages, Tamil and Malayalam, as part of the HASOC - DravidianCodeMix FIRE 2021 shared task, we employed two transformer-based prototypes, which successfully ranked in the top 8 for all tasks. The code for our approach can be viewed and used.
Social media has become extremely influential in modern society, particularly for policy-making in the Western world (for example, 48% of Europeans use social media every day or almost every day). Platforms such as Twitter let users follow politicians, making citizens more involved in political discussion. Likewise, politicians use Twitter to express their views, debate current topics, and promote their political agendas in order to influence voter behavior. Previous research has shown that tweets conveying negative sentiment tend to be retweeted more frequently. In this paper, we analyze tweets from politicians in different countries and explore whether their tweets follow the same trend. Leveraging state-of-the-art pre-trained language models, we perform sentiment analysis on tens of thousands of tweets from Greece, Spain, and the United Kingdom, including those of the governing administrations. We do so by systematically exploring and analyzing the differences between influential and unpopular tweets. Our analysis shows that politicians' negative tweets spread more widely, especially in more recent times, and highlights interesting trends at the intersection of sentiment and popularity.
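Scoring tweets with a pretrained sentiment model of this kind can be sketched in a few lines; the checkpoint below is one public multilingual Twitter model (XLM-T) and an assumption, not necessarily the paper's exact choice:

```python
# Sketch of multilingual tweet sentiment scoring with a pretrained model.
# The checkpoint is one publicly available option (XLM-T); the paper's exact
# model choice is an assumption here.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-xlm-roberta-base-sentiment")

tweets = [
    "La nueva ley es un desastre absoluto.",       # Spanish, likely negative
    "Proud of what our government achieved today"  # English, likely positive
]
for t, result in zip(tweets, sentiment(tweets)):
    print(result["label"], round(result["score"], 3), t)
```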
A large amount of location information exists in unstructured texts such as social media posts, news stories, scientific articles, web pages, travel blogs, and historical archives. Geoparsing refers to the process of recognizing location references in texts and identifying their geospatial representations. While geoparsing can benefit many domains, a summary of its specific applications is still missing. In addition, there is no comprehensive review and comparison of existing approaches for location reference recognition, which is the first and a core step of geoparsing. To fill these research gaps, this review first summarizes seven typical application domains of geoparsing: geographic information retrieval, disaster management, disease surveillance, traffic management, spatial humanities, tourism management, and crime management. We then review existing approaches for location reference recognition, grouping them into four categories: rule-based, gazetteer matching-based, statistical learning-based, and hybrid approaches. Next, we thoroughly evaluate the correctness and computational efficiency of the 27 most widely used approaches on 26 public datasets containing different types of text (e.g., social media posts and news stories) and 39,736 location references in total. The results of this thorough evaluation can help inform future methodological development for location reference recognition and can guide the selection of appropriate approaches according to application needs.
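Of the four approach families, gazetteer matching is the simplest to illustrate. A toy sketch with a placeholder gazetteer (real systems add disambiguation and much larger gazetteers such as GeoNames):

```python
# Toy gazetteer-matching location reference recognizer, illustrating one of the
# four approach families reviewed. The gazetteer is a tiny placeholder.
import re

GAZETTEER = {"london": (51.5074, -0.1278),
             "paris": (48.8566, 2.3522),
             "new york": (40.7128, -74.0060)}

def recognize_locations(text):
    """Return (name, (lat, lon)) for each gazetteer entry found in the text."""
    hits = []
    for name, coords in GAZETTEER.items():
        # Word-boundary match, case-insensitive; real systems also handle
        # ambiguity (e.g., 'Paris, Texas') and non-gazetteer spellings.
        if re.search(rf"\b{re.escape(name)}\b", text, flags=re.IGNORECASE):
            hits.append((name, coords))
    return hits

print(recognize_locations("Flooding reported in New York and Paris today."))
# [('paris', (48.8566, 2.3522)), ('new york', (40.7128, -74.006))]
```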
Numerous methods have been developed in recent years to monitor the spread of negativity by eliminating vulgar, offensive, and abusive comments from social media platforms. However, relatively few studies focus on embracing positivity and reinforcing supportive, reassuring content in online forums. We therefore propose creating KanHope, an English-Kannada hope speech dataset, and compare several experiments to benchmark the dataset. The dataset consists of 6,176 user-generated comments in code-mixed Kannada, scraped from YouTube and manually annotated as bearing hope speech or not. Furthermore, we introduce DC-BERT4HOPE, a dual-channel model that uses the English translation of KanHope for additional training to promote hope speech detection. The approach achieves a weighted F1-score of 0.756, bettering other models. KanHope thus aims to instigate research in Kannada while broadly encouraging researchers to take a pragmatic approach towards online content that is encouraging, positive, and supportive.
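The abstract does not detail DC-BERT4HOPE's architecture; one plausible dual-channel reading, sketched here purely as an illustration, encodes the code-mixed Kannada text and its English translation with separate BERT encoders and classifies over the concatenated representations:

```python
# Hedged sketch of a dual-channel classifier in the spirit of DC-BERT4HOPE:
# one encoder for the code-mixed Kannada text, one for its English translation,
# with concatenated [CLS] vectors. This is an illustration under stated
# assumptions, not the paper's exact architecture or hyperparameters.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DualChannelClassifier(nn.Module):
    def __init__(self, kn_model="bert-base-multilingual-cased",
                 en_model="bert-base-uncased", num_labels=2):
        super().__init__()
        self.kn_encoder = AutoModel.from_pretrained(kn_model)
        self.en_encoder = AutoModel.from_pretrained(en_model)
        hidden = self.kn_encoder.config.hidden_size + self.en_encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, kn_inputs, en_inputs):
        kn_cls = self.kn_encoder(**kn_inputs).last_hidden_state[:, 0]  # [CLS]
        en_cls = self.en_encoder(**en_inputs).last_hidden_state[:, 0]  # [CLS]
        return self.classifier(torch.cat([kn_cls, en_cls], dim=-1))

kn_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
en_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = DualChannelClassifier()
logits = model(kn_tok("ನೀವು ಚೆನ್ನಾಗಿ ಮಾಡುತ್ತೀರಿ", return_tensors="pt"),
               en_tok("you are doing well", return_tensors="pt"))
```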
Spanish is one of the most spoken languages in the globe, but it is not necessarily written and spoken the same way in different countries. Understanding local language variations can help improve model performance on regional tasks, both by capturing local structures and by better interpreting a message's content. For instance, consider a machine learning engineer who automates some language classification task for a particular region, or a social scientist trying to understand a regional event through its echoes on social media; both can take advantage of dialect-based language models to understand what is happening with more contextual information and hence more precision. This manuscript presents and describes a set of regionalized resources for the Spanish language built on four years of public Twitter messages geotagged in 26 Spanish-speaking countries. We introduce word embeddings based on FastText, language models based on BERT, and per-region sample corpora. We also provide a broad comparison among regions covering lexical and semantic similarities, as well as examples of using regional resources on message classification tasks.
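Building per-region FastText embeddings and contrasting a word's neighbors across regions can be sketched with gensim; the corpus file names are placeholders, and the paper's released resources would normally be used instead of retraining:

```python
# Sketch of building per-region FastText embeddings and comparing a word's
# nearest neighbors across regions. File names are placeholders; the released
# regional resources should be preferred over retraining from scratch.
from gensim.models import FastText

def train_region(path):
    # One whitespace-tokenized tweet per line in the (hypothetical) corpus file.
    sentences = [line.split() for line in open(path, encoding="utf-8")]
    return FastText(sentences, vector_size=100, window=5, min_count=5)

mx = train_region("tweets_MX.txt")   # Mexico sample corpus (placeholder)
ar = train_region("tweets_AR.txt")   # Argentina sample corpus (placeholder)

# Regional usage differences show up as diverging nearest-neighbor lists.
print(mx.wv.most_similar("coche", topn=5))
print(ar.wv.most_similar("coche", topn=5))
```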