Many downstream applications make use of dependency trees, and therefore rely on dependency parsers producing correct, or at least consistent, output. However, dependency parsers are trained using machine learning and are therefore susceptible to unwanted inconsistencies due to biases in the training data. This paper explores the effects of such biases in four languages (English, Swedish, Russian, and Ukrainian) through experiments in which we study the effect of replacing numerals in sentences. We show that such seemingly trivial changes can cause differences in the output, and show that data augmentation can remedy the problem.
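To make the probe concrete, here is a minimal sketch of numeral substitution as a parser-consistency test; the regex, the random replacement range, and the `parse_fn` interface are illustrative assumptions, not the paper's exact setup.

```python
import random
import re

# Assumed probe: swap every numeral in a sentence for a random
# replacement and check whether a parser's output stays consistent.
NUMERAL = re.compile(r"\b\d+\b")

def replace_numerals(sentence: str, rng: random.Random) -> str:
    """Return a copy of `sentence` with each numeral replaced at random."""
    return NUMERAL.sub(lambda m: str(rng.randint(0, 9999)), sentence)

def is_consistent(parse_fn, sentence: str, n_trials: int = 10) -> bool:
    """True if `parse_fn` yields identical trees for all numeral variants.

    `parse_fn` is an assumed callable mapping a sentence to a hashable
    parse representation (e.g., a tuple of head indices).
    """
    rng = random.Random(0)
    reference = parse_fn(sentence)
    return all(
        parse_fn(replace_numerals(sentence, rng)) == reference
        for _ in range(n_trials)
    )
```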
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence labeling. Despite their state-of-the-art performance on high-resource languages, they still lag behind their statistical counterparts in low-resource scenarios. One methodology to counter this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a wave of text augmentation techniques, the field still lacks a systematic performance analysis across multiple languages and sequence labeling tasks. To fill this gap, we investigate three categories of text augmentation methods that perform changes at the syntax level (e.g., cropping sub-sentences), the token level (e.g., random word insertion), and the character level (e.g., character swapping). We systematically compare them on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the techniques studied to be effective on morphologically rich languages in general, rather than on analytic languages such as Vietnamese. Our results suggest that augmentation techniques can further improve strong mBERT-based baselines. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss how the results depend most heavily on the task, the language pair, and the model type.
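As an illustration of the token- and character-level operations named above, a hedged sketch follows; the exact sampling schemes in the surveyed augmenters may differ.

```python
import random

def char_swap(token: str, rng: random.Random) -> str:
    """Character-level augmentation: swap two adjacent characters."""
    if len(token) < 2:
        return token
    i = rng.randrange(len(token) - 1)
    chars = list(token)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def random_insertion(tokens: list[str], rng: random.Random) -> list[str]:
    """Token-level augmentation: re-insert a randomly chosen token
    at a random position (one simple variant of random word insertion)."""
    out = tokens.copy()
    out.insert(rng.randrange(len(out) + 1), rng.choice(tokens))
    return out

rng = random.Random(42)
print(char_swap("parsing", rng))
print(random_insertion("data augmentation helps low resource tasks".split(), rng))
```

For sequence labeling, an inserted token also needs a label (e.g., copied from its source token), a detail omitted here.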
This paper presents an evaluation of the quality of automatically generated reading comprehension questions from Swedish text, using the Quinductor method. Quinductor is a lightweight, data-driven, non-neural method for automatic question generation (QG). The evaluation shows that Quinductor is a viable QG method that can provide a strong baseline for neural-network-based QG methods.
We contribute to the discussion on parsing performance in NLP by introducing a measurement that evaluates the differences between the distributions of edge displacement (the directed distance of edges) seen in training and test data. We hypothesize that this measurement will be related to differences observed in parsing performance across treebanks. We motivate this by building on previous work, and then attempt to falsify the hypothesis using a number of statistical methods. We establish that there is a statistical correlation between this measurement and parsing performance even when controlling for potential covariates. We then use it to develop a sampling technique that yields adversarial and complementary splits. This gives an idea of the lower and upper bounds of parsing performance for a given treebank, in lieu of freshly sampled data. In a broader sense, the methodology presented here can serve as a reference for future correlation-based exploratory work in NLP.
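Taking "edge displacement" to be the signed distance from a dependent to its head, the distribution can be collected from head-annotated data roughly as below; the sign convention, the treatment of the root edge, and the L1 comparison are illustrative assumptions rather than the paper's exact definitions.

```python
from collections import Counter

def edge_displacements(sentences):
    """Collect signed head-minus-dependent offsets over a treebank.

    `sentences` is assumed to be an iterable of head lists, where
    heads[i] is the 1-based head index of token i+1 (0 for the root).
    """
    counts = Counter()
    for heads in sentences:
        for dep_pos, head_pos in enumerate(heads, start=1):
            if head_pos == 0:       # skip the artificial root edge
                continue
            counts[head_pos - dep_pos] += 1
    return counts

train = edge_displacements([[2, 0, 2], [0, 1]])
test = edge_displacements([[0, 1, 1]])
# The paper compares such train/test distributions; a simple stand-in:
support = set(train) | set(test)
total_tr, total_te = sum(train.values()), sum(test.values())
l1 = sum(abs(train[d] / total_tr - test[d] / total_te) for d in support)
print(l1)
```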
The choice of treebanks for parsing evaluation, and the spurious effects that a biased choice may produce, have not been explored in detail. This paper studies how evaluating on a single subset of treebanks can lead to weak conclusions. First, we take a few contrasting parsers and run them on subsets of treebanks proposed in previous work, whose use was justified (or not) on criteria such as typology or data scarcity. Second, we run a large-scale version of this experiment, creating a large number of random subsets of treebanks and comparing on them many parsers whose scores are available. The results show substantial variability across subsets, and that while establishing guidelines for good treebank selection is hard, it is still possible to detect potentially harmful strategies.
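The flavor of the large-scale experiment can be sketched as follows; the parser names, scores, and subset size are invented purely for illustration.

```python
import random
from collections import Counter

scores = {  # hypothetical LAS scores per treebank, for illustration only
    "parser_a": {"tb1": 85.0, "tb2": 78.2, "tb3": 91.4, "tb4": 66.3},
    "parser_b": {"tb1": 84.1, "tb2": 80.0, "tb3": 90.2, "tb4": 70.5},
}
treebanks = list(scores["parser_a"])

def ranking(subset):
    """Rank parsers by their average score on a subset of treebanks."""
    avg = {p: sum(s[tb] for tb in subset) / len(subset)
           for p, s in scores.items()}
    return tuple(sorted(avg, key=avg.get, reverse=True))

rng = random.Random(0)
outcomes = Counter(ranking(rng.sample(treebanks, k=2)) for _ in range(1000))
print(outcomes)  # which parser "wins" depends on the sampled subset
```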
We propose a transition-based approach that, by training a single model, can efficiently parse any input sentence with both constituent and dependency trees, supporting both continuous/projective and discontinuous/non-projective syntactic structures. To that end, we develop a Pointer Network architecture with two separate task-specific decoders and a common encoder, and follow a multitask learning strategy to jointly train them. The resulting quadratic system not only becomes the first parser that can jointly produce both unrestricted constituent and dependency trees from a single model, but also proves that both syntactic formalisms can benefit from each other during training, achieving state-of-the-art accuracies on several widely-used benchmarks such as the continuous English and Chinese Penn Treebanks, as well as the discontinuous German NEGRA and TIGER datasets.
The grammatical analysis of texts in any human language typically involves a number of basic processing tasks, such as tokenization, morphological tagging, and dependency parsing. State-of-the-art systems can achieve high accuracy on these tasks for languages with large datasets, but yield poor results for languages such as Tagalog that have little to no annotated data. To address this problem for Tagalog, we investigate the use of auxiliary data sources for creating task-specific models in the absence of annotated Tagalog data. We also explore the use of word embeddings and data augmentation to improve performance when only a small amount of annotated Tagalog data is available. We show that these zero-shot and few-shot approaches yield substantial improvements in grammatical analysis of both in-domain and out-of-domain Tagalog text, compared to state-of-the-art supervised baselines.
The BERT family of neural language models has become highly popular due to its ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.
In this study, we propose a morpheme-based scheme for Korean dependency parsing and adopt the proposed scheme in Universal Dependencies. We present the linguistic rationale that motivates and necessitates adopting the morpheme-based format, and develop scripts that automatically convert between the original format used by Universal Dependencies and the proposed morpheme-based format. The effectiveness of the proposed format for Korean dependency parsing is then demonstrated with both statistical and neural models, including UDPipe and Stanza, using our carefully constructed morpheme-based word embeddings for Korean. MorphUD outperforms previous parsing results on all Korean UD treebanks, and we also present a detailed error analysis.
Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits and in some cases produce models that outperform multilingual approaches.
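A sketch of what "greatly reducing TLM size" can look like with the Hugging Face transformers API; the hyperparameters below are assumed for illustration and are not MicroBERT's actual configuration, and the auxiliary POS/parsing heads are only indicated in a comment.

```python
from transformers import BertConfig, BertForMaskedLM

# Illustrative "micro" configuration (assumed values, not MicroBERT's):
config = BertConfig(
    vocab_size=16000,
    hidden_size=128,
    num_hidden_layers=3,
    num_attention_heads=4,
    intermediate_size=512,
)
model = BertForMaskedLM(config)
print(sum(p.numel() for p in model.parameters()))  # a few million, vs. mBERT's ~178M

# Pretraining would then optimize the MLM loss plus weighted supervised
# losses from POS-tagging and parsing heads on the shared encoder.
```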
Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. An active line of inquiry is whether large pretrained language models (LLMs) are able to acquire syntax by training on text alone; understanding a model's syntactic capabilities is essential to understanding how it processes and makes use of language. In this paper, we propose a new method, SSUD, which allows for the induction of syntactic structures without supervision from gold-standard parses. Instead, we seek to define formalism-agnostic, model-intrinsic syntactic parses by using a property of syntactic relations: syntactic substitutability. We demonstrate both quantitative and qualitative gains on dependency parsing tasks using SSUD, and induce syntactic structures which we hope provide clarity into LLMs and linguistic representations alike.
I train models for the task of neural machine translation using the Hunglish2 corpus. The main contribution of this work is the evaluation of different data augmentation methods during the training of NMT models. I propose five different augmentation methods that are structure-aware, meaning that instead of randomly selecting words for blanking or replacement, the dependency tree of the sentence is used as the basis for the augmentation. I begin with a detailed literature review on neural networks, sequential modeling, neural machine translation, dependency parsing, and data augmentation. After a detailed exploratory data analysis and preprocessing of the Hunglish2 corpus, I perform experiments with the proposed data augmentation techniques. The best Hungarian-to-English model achieves a BLEU score of 33.9, while the best English-to-Hungarian model achieves a BLEU score of 28.6.
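As a hedged sketch of what structure-aware blanking can look like (one possible variant, not a reproduction of the thesis's five methods), a whole dependency subtree is masked instead of independently chosen words:

```python
import random

def subtree(heads, root):
    """Collect the 1-based token ids in the subtree rooted at `root`,
    given heads[i] = 1-based head of token i+1 (0 marks the sentence root)."""
    ids = {root}
    changed = True
    while changed:
        changed = False
        for dep, head in enumerate(heads, start=1):
            if head in ids and dep not in ids:
                ids.add(dep)
                changed = True
    return ids

def blank_subtree(tokens, heads, rng):
    """Structure-aware blanking: mask a whole dependency subtree."""
    root = rng.randrange(1, len(tokens) + 1)
    masked = subtree(heads, root)
    return ["<blank>" if i in masked else tok
            for i, tok in enumerate(tokens, start=1)]

tokens = ["the", "big", "dog", "barked"]
heads = [3, 3, 4, 0]   # "dog" governs "the" and "big"; "barked" is the root
print(blank_subtree(tokens, heads, random.Random(1)))
```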
Pretrained multilingual language models have become a common tool for transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, Universal Dependencies parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches, while raising new questions about how to optimally adapt multilingual models to low-resource settings.
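Vocabulary augmentation, for example, is commonly implemented along the following lines with the Hugging Face API; the added tokens here are made-up placeholders, and this is a generic sketch rather than the paper's recipe.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=17  # e.g., UPOS tags
)

# Assumed example: add frequent target-language word pieces that the
# multilingual vocabulary is missing, then grow the embedding matrix.
new_tokens = ["##ɓa", "ŋga", "ʃi"]  # placeholder low-resource subwords
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens")
```

Script transliteration, by contrast, rewrites the target-language text into a script better covered by the pretraining data before tokenization.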
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike previous WSI datasets for Russian, RuDSI is completely data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of the graph clustering, different derivative datasets can be produced from the raw annotation. We report the performance that several baseline WSI methods obtain on RuDSI and discuss possibilities for improving these scores.
Although there are several language processing pipelines available for Hungarian, none of them satisfies the requirements of today's NLP applications. A language processing pipeline should consist of close-to-state-of-the-art lemmatization, morphological analysis, entity recognition, and word embeddings. Industrial text processing applications have to satisfy non-functional software quality requirements; what is more, frameworks supporting multiple languages are increasingly favored. This paper introduces HuSpaCy, an industry-ready Hungarian language processing pipeline. The presented tool provides components for the most important basic linguistic analysis tasks. It is open-source and available under a permissive license. Our system is built on top of spaCy's NLP components, which means it is fast, has a rich ecosystem of NLP applications and extensions, extensive documentation, and a well-known API. Besides an overview of the underlying models, we also present a rigorous evaluation on common benchmark datasets. Our experiments confirm that HuSpaCy has high accuracy on all subtasks while maintaining resource-efficient prediction capabilities.
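Usage follows the familiar spaCy pattern; the snippet below matches the HuSpaCy project's documented quickstart as I recall it, so verify the exact calls against the current docs.

```python
import huspacy

# Download and load the default Hungarian pipeline (interface as
# documented by the HuSpaCy project).
huspacy.download()
nlp = huspacy.load()

doc = nlp("Budapest Magyarország fővárosa.")
for token in doc:
    # Standard spaCy token attributes: text, lemma, POS, dependency label.
    print(token.text, token.lemma_, token.pos_, token.dep_)
```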
Recent neural supervised topic segmentation models have achieved outstanding effectiveness over unsupervised methods, with the help of large-scale training corpora sampled from Wikipedia. However, these models may suffer from limited robustness and transferability, caused by exploiting simple linguistic cues for prediction while ignoring the more important inter-sentential coherence. To address this issue, we propose a linguistically aware neural topic segmentation model that injects above-sentence discourse dependency structures to encourage the model to base its topic boundary predictions more on the local coherence between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures into a neural topic segmenter with our proposed strategy can substantially improve its performance on both in-domain and out-of-domain data, with little increase in model complexity.
This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Shared task participants were expected to develop trainable systems capable of identifying mentions and clustering them according to identity coreference. The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, was used as the source of training and evaluation data. The CoNLL score used in previous coreference-oriented shared tasks served as the main evaluation metric. Five participating teams submitted eight coreference prediction systems; in addition, the organizers provided a competitive Transformer-based baseline system at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of the CoNLL score averaged over all datasets for all languages).
In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is the use of standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show a competitive performance, these text-to-parse transducers are still lagging behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we here extend the framework of sequence-to-sequence models for constituent parsing, not only by providing a more powerful neural architecture for improving their performance, but also by enlarging their coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, we test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.
Most computational models of dependency syntax consist of distributions over spanning trees. However, the majority of dependency treebanks require that every valid dependency tree has a single edge coming out of the ROOT node, a constraint that is not part of the definition of spanning trees. For this reason all standard inference algorithms for spanning trees are suboptimal for inference over dependency trees. Zmigrod et al. (2021b) proposed algorithms for sampling with and without replacement from the dependency tree distribution that incorporate the single-root constraint. In this paper we show that their fastest algorithm for sampling with replacement, Wilson-RC, is in fact producing biased samples, and we provide two alternatives that are unbiased. Additionally, we propose two algorithms (one incremental, one parallel) that reduce the asymptotic runtime of the algorithm for sampling k trees without replacement to O(kn³). These algorithms are both asymptotically and practically more efficient.
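For context, a generic spanning-tree sampler can always be corrected for the single-root constraint by rejection, which is unbiased for the constrained distribution but potentially slow; that overhead is what dedicated single-root algorithms avoid. In the sketch below, `sample_spanning_tree` is a hypothetical unconstrained sampler, not an algorithm from the paper.

```python
def sample_single_root_tree(sample_spanning_tree, weights, rng):
    """Rejection sampling: draw unconstrained spanning trees until one
    has exactly one edge out of ROOT (index 0).

    `sample_spanning_tree(weights, rng)` is an assumed helper returning
    a head list `heads`, where heads[i] is the head of token i+1 and
    0 denotes ROOT. Conditioning by rejection yields exact samples from
    the single-root distribution, at the cost of wasted draws.
    """
    while True:
        heads = sample_spanning_tree(weights, rng)
        if sum(1 for h in heads if h == 0) == 1:
            return heads
```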
While extensive research has been conducted on parsing English sentences into Abstract Meaning Representation (AMR) graphs, where parser output is compared to gold graphs via automatic metrics, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation. Taking advantage of the super-sentential level of annotation from previous work, we introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from unmerging and the lack of coreference information. Next, we describe improvements to the Smatch metric that make it tractable for comparing document-level graphs, and use it to re-evaluate the best published document-level AMR parser. We also present a pipeline approach combining a top-performing AMR parser with coreference resolution systems, providing a strong baseline for future research.