We present the Multilingual Open Text (MOT) corpus, a new multilingual corpus containing text in 44 languages, many of which have limited existing text resources for natural language processing. The first release of the corpus contains over 2.8 million news articles and an additional 1 million short snippets (photo captions, video descriptions, etc.) published between 2001 and 2022, collected from Voice of America websites. We describe the process of collecting, filtering, and processing the data. The source material is in the public domain, our collection is licensed under a Creative Commons license (CC BY 4.0), and all software used to create the corpus is released under the MIT license. The corpus will be regularly updated as additional documents are published.
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a broad range of resources, focusing on those that are publicly available. In addition to providing information on the available linguistic resources, we present a set of recommendations and identify gaps in the available data for conducting research and building applications in Turkish linguistics and natural language processing.
The landscape of privacy laws and regulations around the world is complex and ever-changing. National and supranational laws, agreements, decrees, and other government-issued rules form a patchwork that companies must follow in order to operate internationally. To examine the status and evolution of this patchwork, we introduce the Government Privacy Instructions corpus, or GPI corpus, of 1,043 privacy laws, regulations, and guidelines covering 182 jurisdictions. The corpus enables large-scale quantitative and qualitative examination of legal focus. We examine the temporal distribution of when GPIs were created and illustrate the dramatic increase in privacy legislation over the past 50 years, although a finer-grained examination reveals that the rate of increase varies depending on the personal data types that GPIs address. Our exploration also shows that most privacy laws each address relatively few personal data types, indicating that comprehensive privacy legislation remains rare. Additionally, topic modeling results show the prevalence of common themes in GPIs, such as finance, healthcare, and telecommunications. Finally, we release the corpus to the research community to promote further study.
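The abstract above mentions topic modeling over the GPI texts. As a rough illustration of that kind of analysis (not the authors' actual pipeline), a standard LDA setup might look as follows; the vectorizer settings, topic count, and toy documents are all assumptions:

```python
# Minimal LDA topic-modeling sketch over a list of legal texts.
# The toy `documents`, topic count, and vectorizer settings are
# illustrative placeholders, not the GPI authors' configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "Controllers shall protect financial account data and transaction records.",
    "Health records may be processed only with the patient's informed consent.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```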
"Leichte Sprache", the German counterpart to Simple English, is a regulated language aiming to facilitate complex written language that would otherwise stay inaccessible to different groups of people. We present a new sentence-aligned monolingual corpus for Simple German -- German. It contains multiple document-aligned sources which we have aligned using automatic sentence-alignment methods. We evaluate our alignments against a manually labelled subset of the aligned documents. The quality of our sentence alignments, as measured by F1-score, surpasses previous work. We publish the dataset under CC BY-SA and the accompanying code under the MIT license.
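The alignment quality reported above is measured by F1-score against manually labelled alignments. A minimal sketch of that evaluation, with made-up index pairs (our illustration, not the released code):

```python
# Sketch: precision/recall/F1 of predicted sentence alignments against
# a manually labelled gold set. Alignments are (simple_idx, german_idx)
# index pairs; the example pairs below are made up for illustration.
def alignment_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

pred = [(0, 0), (1, 2), (2, 3)]
gold = [(0, 0), (1, 1), (2, 3)]
print(alignment_f1(pred, gold))  # -> (0.666..., 0.666..., 0.666...)
```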
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require vast amounts of data for (pre-)training, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, the results for Spanish have important shortcomings: they are either too small in comparison with other languages, or of low quality resulting from sub-optimal cleaning and deduplication. In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in extraction, purification, and deduplication. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under a CC BY-NC-ND 4.0 license and is available on HuggingFace.
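As a hedged sketch of deduplication that preserves document and paragraph boundaries (the real esCorpius pipeline is highly parallel and considerably more sophisticated), exact paragraph-level dedup via hashing might look like this:

```python
# Sketch of hash-based paragraph deduplication that preserves document
# boundaries: duplicate paragraphs are dropped globally, while surviving
# paragraphs stay grouped under their original document. This illustrates
# the general idea only, not the esCorpius pipeline.
import hashlib

def deduplicate(corpus):
    """corpus: list of documents, each a list of paragraph strings."""
    seen = set()
    deduped = []
    for doc in corpus:
        kept = []
        for para in doc:
            digest = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        if kept:  # drop documents emptied by deduplication
            deduped.append(kept)
    return deduped

corpus = [["Hola mundo.", "Texto repetido."], ["Texto repetido.", "Otro parrafo."]]
print(deduplicate(corpus))  # [['Hola mundo.', 'Texto repetido.'], ['Otro parrafo.']]
```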
This preprint describes work in progress on LR-Sum, a new permissively-licensed dataset created with the goal of enabling further research in automatic summarization for less-resourced languages. LR-Sum contains human-written summaries for 40 languages, many of which are less-resourced. We describe our process for extracting and filtering the dataset from the Multilingual Open Text corpus (Palen-Michel et al., 2022). The source data is public domain newswire collected from Voice of America websites, and LR-Sum is released under a Creative Commons license (CC BY 4.0), making it one of the most openly-licensed multilingual summarization datasets. We describe how we plan to use the data for modeling experiments and discuss limitations of the dataset.
We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly available parallel corpora, and additionally mine 37.4 million sentence pairs from the web, resulting in a 4x increase. We mine the parallel sentences from the web by combining many corpora, tools, and methods: (a) web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest-neighbor search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validates the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We trained multilingual NMT models spanning all these languages on Samanantar; these outperform existing models and baselines on publicly available benchmarks such as FLORES, establishing the utility of Samanantar. Our data and models are publicly available at https://indicnlp.ai4bharat.org/samanantar/ and we hope they will help advance research in NMT and multilingual NLP.
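Steps (c) and (d) of the mining pipeline above, embedding sentences with a multilingual model and matching translation candidates by nearest-neighbor search, can be sketched as follows. This illustration uses LaBSE via the sentence-transformers library and brute-force cosine search; the paper's actual encoders and ANN indexing may differ:

```python
# Sketch of mining parallel sentence candidates with a multilingual
# sentence encoder plus nearest-neighbor matching. Uses LaBSE via the
# sentence-transformers library as an illustrative encoder; Samanantar's
# actual models and approximate-nearest-neighbor index may differ.
import numpy as np
from sentence_transformers import SentenceTransformer

english = ["The weather is nice today.", "Schools reopen on Monday."]
hindi = ["सोमवार को स्कूल फिर खुलेंगे।", "आज मौसम अच्छा है।"]

model = SentenceTransformer("sentence-transformers/LaBSE")
en_vecs = model.encode(english, normalize_embeddings=True)
hi_vecs = model.encode(hindi, normalize_embeddings=True)

# Vectors are L2-normalized, so a dot product gives cosine similarity;
# at corpus scale this brute-force step is replaced by approximate search.
scores = en_vecs @ hi_vecs.T
for i, en in enumerate(english):
    j = int(np.argmax(scores[i]))
    print(f"{en}  <->  {hindi[j]}  (score={scores[i, j]:.2f})")
```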
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Italy is characterized by a one-of-a-kind linguistic diversity landscape within Europe, which implicitly encodes local knowledge, cultural traditions, artistic expressions, and the history of its speakers. However, Italy's more than 30 language varieties are at risk of disappearing within a few generations. Language technology has a major role to play in preserving endangered languages, but it currently struggles with varieties that are under-resourced, mostly lack standardized orthography, and are used primarily in spoken settings. In this paper, we introduce the linguistic context of Italy and discuss the challenges facing the development of NLP technologies for Italy's language varieties. We provide potential directions and advocate for a paradigm shift from machine-centric to speaker-centric NLP. Finally, we propose building a local community working towards responsible, participatory development of speech and language technologies for the languages and dialects of Italy.
We introduce ParaNames, a multilingual parallel name resource consisting of 118 million names spanning 400 languages. Names are provided for 13.6 million entities, which are mapped to standardized entity types (PER/LOC/ORG). Using Wikidata as the source, we create the largest resource of this type to date. We describe our approach to filtering and standardizing the data to provide the best possible quality. ParaNames is useful for multilingual language processing, both in defining tasks for name translation/transliteration and as supplementary data for tasks such as named entity recognition and linking. We demonstrate an application of ParaNames by training multilingual models for canonical name translation to and from English. Our resource is released under a Creative Commons license (CC BY 4.0) at https://github.com/bltlab/paranames.
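A typical use of such a resource is filtering it down to one language and entity type before training a transliteration model. The sketch below assumes a TSV release with columns wikidata_id, eng, label, language, and type; these column names are guesses for illustration, so consult the repository for the actual schema:

```python
# Sketch: filtering a ParaNames-style TSV to one language and entity
# type. The file path and column names (wikidata_id, eng, label,
# language, type) are assumptions made for illustration; see
# https://github.com/bltlab/paranames for the actual release format.
import pandas as pd

names = pd.read_csv("paranames.tsv", sep="\t")

# Keep Russian person names, e.g. as transliteration training data.
per_ru = names[(names["type"] == "PER") & (names["language"] == "ru")]
pairs = per_ru[["eng", "label"]]  # (English name, localized name) pairs
print(pairs.head())
```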
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity. Prior surveys estimating the availability of resources based on the number of datasets can be misleading as dataset quality varies: many datasets are automatically induced or translated from English data. To provide a more comprehensive picture of language resources, we examine the characteristics of 156 publicly available NLP datasets. We manually annotate how they are created, including input text and label sources and tools used to build them, and what they study, tasks they address and motivations for their creation. After quantifying the qualitative NLP resource gap across languages, we discuss how to improve data collection in low-resource languages. We survey language-proficient NLP researchers and crowd workers per language, finding that their estimated availability correlates with dataset availability. Through crowdsourcing experiments, we identify strategies for collecting high-quality multilingual data on the Mechanical Turk platform. We conclude by making macro and micro-level suggestions to the NLP community and individual researchers for future multilingual data development.
ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. Its design was influenced by the need for a high quality, large scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher-quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.
Two key assumptions shape the usual view of ranked retrieval: (1) that the searcher can choose words for their query that might appear in the documents they wish to see, and (2) that ranking the retrieved documents will suffice because the searcher will be able to recognize those they wished to find. When the documents to be searched are in a language the searcher does not know, neither assumption holds. In such cases, Cross-Language Information Retrieval (CLIR) is needed. This chapter reviews the state of the art in cross-language information retrieval and outlines some open research questions.
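One classic CLIR approach covered in such reviews is query translation followed by monolingual ranked retrieval. A minimal sketch using the rank_bm25 package, with a stand-in translate() function and toy Spanish documents (our illustration, not the chapter's code):

```python
# Minimal query-translation CLIR sketch: translate the query into the
# document language, then run monolingual BM25 ranking. `translate` is a
# stand-in for any MT system; the documents here are toy examples.
from rank_bm25 import BM25Okapi

documents = [
    "el gobierno anuncia nuevas medidas economicas",
    "resultados del partido de futbol de anoche",
    "medidas sanitarias frente a la gripe",
]
bm25 = BM25Okapi([doc.split() for doc in documents])

def translate(query, target_lang="es"):
    # Stand-in: in practice, call an MT system or bilingual dictionary.
    lookup = {"economic measures": "medidas economicas"}
    return lookup.get(query, query)

query = translate("economic measures")
scores = bm25.get_scores(query.split())
best = max(range(len(documents)), key=lambda i: scores[i])
print(documents[best])  # -> "el gobierno anuncia nuevas medidas economicas"
```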
This paper presents the OPUS ecosystem with a focus on the development of open machine translation models and tools, and their integration into end-user applications, development platforms and professional workflows. We discuss our on-going mission of increasing language coverage and translation quality, and also describe on-going work on the development of modular translation models and speed-optimized compact solutions for real-time translation on regular desktops and small devices.
In academia, plagiarism is certainly not an emerging concern, but it has grown in magnitude with the popularization of the Internet and easy access to a global source of content, making human intervention insufficient. Nevertheless, plagiarism is far from being an unaddressed problem: computer-assisted plagiarism detection is currently an active area of research that falls within the fields of Information Retrieval (IR) and Natural Language Processing (NLP). Many software solutions have emerged to help fulfill this task, and this paper presents an overview of plagiarism detection systems for use in Arabic, French, and English academic and educational settings. A comparison is held between eight systems on the basis of their features, usability, and technical aspects, as well as their performance in detecting three levels of obfuscation from different sources: verbatim, paraphrase, and cross-language plagiarism. An examination of the technical forms of plagiarism is also carried out in the context of this study. Additionally, a survey of plagiarism typologies and classifications proposed by different authors is provided.
The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.
Large datasets of paired images and text are increasingly popular for learning generic representations for vision and vision-and-language tasks. Such datasets have been built by querying search engines or collecting HTML alt-text; since web data is noisy, they require complex filtering pipelines to maintain quality. We explore alternate data sources for collecting high-quality data with minimal filtering. We introduce RedCaps, a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. We collect data from a manually curated set of subreddits, which provide coarse image labels and allow us to steer the dataset composition without labeling individual instances. We show that captioning models trained on RedCaps produce rich and varied captions preferred by humans, and learn visual representations that transfer to many downstream tasks.
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in the semi-structured data of patents and patent applications. Although the impact and novelty of the innovations expressed in patent data are difficult to measure through traditional means, machine learning offers a promising set of techniques for evaluating novelty, summarizing contributions, and embedding semantics. In this paper, we introduce the Harvard USPTO Patent Dataset (HUPD), a large-scale, well-structured, multi-purpose corpus of English-language patent applications filed to the United States Patent and Trademark Office (USPTO) between 2004 and 2018. With more than 4.5 million patent documents, HUPD is two to three times larger than comparable corpora. Unlike patent datasets previously proposed in NLP, HUPD contains the inventor-submitted versions of patent applications (not the final versions of granted patents), which allows us to study patentability at the time of filing using NLP methods for the first time. It is also novel in including rich structured metadata alongside the text of the patent applications: by providing each application's metadata along with all of its text fields, the dataset enables researchers to perform new sets of NLP tasks that leverage variation in structured covariates. As a case study of the types of research HUPD makes possible, we introduce a new task to the NLP community, namely patent acceptance prediction (binary classification of patent decisions). We also show that the structured metadata provided in the dataset allows us to conduct explicit studies of concept shift for this task. Finally, we demonstrate how HUPD can be used for three additional tasks: multi-class classification of patent subject areas, language modeling, and summarization.
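The patent acceptance prediction task described above is a binary text-classification problem. As a hedged baseline sketch (not the paper's models), TF-IDF features with logistic regression could be set up like this, with made-up abstracts and labels standing in for HUPD fields:

```python
# Sketch of patent acceptance prediction as binary classification with a
# simple TF-IDF + logistic-regression baseline. The two toy abstracts and
# labels are made up; HUPD itself provides the real text fields and
# accept/reject decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "A method for encoding video using adaptive block partitioning.",
    "A beverage container comprising a cup and a removable lid.",
]
accepted = [1, 0]  # 1 = accepted, 0 = rejected (illustrative labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(abstracts, accepted)
print(clf.predict(["A method for compressing video frames."]))
```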
Citation information in scholarly data is an important source of insight into the reception of publications and into scholarly discourse. The outcomes of citation analyses and the applicability of citation-based machine learning approaches depend heavily on the completeness of such data. One particular shortcoming of scholarly data today is that non-English publications are often not included in datasets, or that language metadata is not available. As a result, citations between publications in different languages (cross-lingual citations) have been studied only to a very limited degree. In this paper, we present an analysis of cross-lingual citations based on over one million English papers, spanning three scientific disciplines and a time span of three decades. Our investigation covers differences between cited languages and disciplines, trends over time, and the usage characteristics and impact of cross-lingual citations. Among our findings are an increasing rate of citations to publications written in Chinese, citations going primarily to local non-English languages, and consistency in citation intent between cross- and monolingual citations. To facilitate further research, we make our collected data and source code publicly available.
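When language metadata for cited works is missing, one simple way to surface cross-lingual citations is to run language identification over reference titles. A sketch using the langdetect package, with made-up titles (an illustration, not the paper's method):

```python
# Sketch: tagging citations as cross-lingual by running language
# identification on cited titles when language metadata is missing.
# Uses the langdetect package; the example titles are made up.
from langdetect import detect

citing_lang = "en"
cited_titles = [
    "Deep learning for natural language processing",
    "机器学习在引文分析中的应用",
    "Apprentissage automatique et traduction",
]

for title in cited_titles:
    cited_lang = detect(title)
    kind = "cross-lingual" if cited_lang != citing_lang else "monolingual"
    print(f"{kind}: {title} ({cited_lang})")
```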