We explore deep clustering of text representations for unsupervised model interpretation and syntax induction. As these representations are high-dimensional, out-of-the-box methods do not work well on them. Our approach therefore jointly transforms the representations into a lower-dimensional, cluster-friendly space and clusters them. In this work we consider two notions of syntax: part-of-speech induction (POSI) and constituency labelling (CoLab). Interestingly, we find that multilingual BERT (mBERT) contains a surprising amount of syntactic knowledge of English, possibly even as much as English BERT (EBERT). Our model can be used as a supervision-free probe, which is arguably a less biased way of probing. We find that, compared with supervised probes, unsupervised probes show benefits from higher layers. We further note that our unsupervised probe uses EBERT and mBERT representations differently, especially for POSI. We validate the efficacy of the probe by demonstrating its capabilities as an unsupervised syntax induction technique. The probe adapts to both syntactic formalisms by simply adjusting its input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12-tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource-impoverished languages and report strong results.
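The transform-then-cluster idea above can be made concrete with a toy pipeline: map high-dimensional vectors into a small space, then run plain k-means there. This is only an illustrative stand-in (a fixed random projection plus Lloyd's algorithm), not the joint model from the paper, and all function names are ours:

```python
import random

def random_projection(X, d_out, seed=0):
    """Map high-dimensional vectors to d_out dimensions with a fixed random Gaussian matrix."""
    rng = random.Random(seed)
    d_in = len(X[0])
    W = [[rng.gauss(0.0, 1.0 / d_out ** 0.5) for _ in range(d_out)] for _ in range(d_in)]
    return [[sum(x[i] * W[i][j] for i in range(d_in)) for j in range(d_out)] for x in X]

def kmeans(X, k, iters=25):
    """Plain Lloyd's k-means with deterministic spread-out initialisation."""
    centers = [list(X[(i * len(X)) // k]) for i in range(k)]
    assign = [0] * len(X)
    for _ in range(iters):
        for n, x in enumerate(X):
            assign[n] = min(range(k),
                            key=lambda j: sum((a - b) ** 2 for a, b in zip(x, centers[j])))
        for j in range(k):
            members = [X[n] for n in range(len(X)) if assign[n] == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

With, say, 768-dimensional contextual vectors one would project to a few dozen dimensions before clustering; the paper instead learns the low-dimensional space jointly with the clustering objective rather than fixing it up front.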
Previous part-of-speech (POS) induction models usually make independence assumptions (e.g., Markov, unidirectional, local dependency) that do not hold in real languages. For example, subject-verb agreement can be both long-range and bidirectional. To enable flexible dependency modelling, we propose a Masked Part-of-Speech Model (MPoSM), inspired by the recent success of masked language models (MLM). MPoSM can model arbitrary tag dependencies and performs POS induction through an objective of masked POS reconstruction. We achieve competitive results on the English Penn WSJ dataset as well as on the universal treebank containing 10 diverse languages. Although modelling long-range dependencies should ideally help this task, our ablation study shows mixed trends across languages. To better understand this phenomenon, we design a novel synthetic experiment that specifically diagnoses the model's ability to learn tag agreement. Surprisingly, we find that even strong baselines fail to solve this problem consistently in a very simplified setting: agreement between adjacent words. Nonetheless, MPoSM achieves overall better performance. Finally, we conduct a detailed error analysis to shed light on the remaining challenges. Our code is available at https://github.com/owenzx/mposm
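As a loose analogy for the masked-reconstruction idea (and only that; MPoSM is a neural model, and these names are invented for illustration), one can predict a masked tag from both of its neighbours, which already captures bidirectional local dependencies that a left-to-right Markov model cannot:

```python
from collections import Counter, defaultdict

def train_context_table(tag_sequences):
    """Count tags by their (left neighbour, right neighbour) context."""
    ctx = defaultdict(Counter)
    for seq in tag_sequences:
        padded = ["<s>"] + list(seq) + ["</s>"]
        for i in range(1, len(padded) - 1):
            ctx[(padded[i - 1], padded[i + 1])][padded[i]] += 1
    return ctx

def reconstruct_masked(ctx, left, right):
    """Pick the most frequent tag seen between this pair of neighbours."""
    counts = ctx.get((left, right))
    return counts.most_common(1)[0][0] if counts else None
```

The same bidirectional conditioning is what the masked objective gives a neural tagger for free, without committing to any fixed dependency direction.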
Multilingual BERT (mBERT) has demonstrated considerable cross-lingual syntactic ability, whereby it enables effective zero-shot cross-lingual transfer of syntactic knowledge. The transfer is more successful between some languages, but it is not well understood what leads to this variation and whether it fairly reflects differences between languages. In this work, we investigate the distributions of grammatical relations induced from mBERT in the context of 24 typologically different languages. We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic differences in terms of linguistic formalisms. Such differences, learnt via self-supervision, play a crucial role in the zero-shot transfer performance and can be predicted by variation in morphosyntactic properties between languages. These results suggest that mBERT properly encodes languages in a way consistent with linguistic diversity and provide insights into the mechanism of cross-lingual transfer.
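Comparing languages by the distance between their induced distributions over grammatical relations can be illustrated with a standard divergence. The sketch below uses the Jensen-Shannon divergence over toy relation frequencies; the relation labels and the choice of divergence are ours, not necessarily the paper's:

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions,
    e.g. relative frequencies of dependency relations in two languages."""
    support = set(p) | set(q)
    m = {r: 0.5 * (p.get(r, 0.0) + q.get(r, 0.0)) for r in support}
    def kl(a):
        # KL(a || m); terms with zero probability contribute nothing.
        return sum(pa * log2(pa / m[r]) for r, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)
```

With base-2 logarithms the value lies in [0, 1]: identical distributions score 0 and disjoint ones score 1, so it can be read as a normalized syntactic distance between languages.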
There is an ongoing debate in the NLP community over whether modern language models contain linguistic knowledge, recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the rediscovery hypothesis. First, we show that language models that are significantly compressed, but that still perform well on their pretraining objective, retain good scores when probed for linguistic structure. This result supports the rediscovery hypothesis and leads to the second contribution of our paper: an information-theoretic framework relating linguistic information to the language modelling objective. This framework also provides a metric for measuring the impact of linguistic information on the word prediction task. We reinforce our analytical results with experiments on both synthetic and real NLP tasks in English.
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits and in some cases produce models that outperform multilingual approaches.
Although much work has been done on understanding the representations learned within deep NLP models and the knowledge they capture, little attention has been paid to individual neurons. We present a technique, linguistic correlation analysis, to extract the neurons in a model that are salient with respect to any extrinsic property, with the goal of understanding how such knowledge is preserved within neurons. We carry out a fine-grained analysis to answer the following questions: (i) can we identify subsets of neurons in the network that capture specific linguistic properties? (ii) how localized or distributed are neurons across the network? (iii) how redundantly is the information preserved? (iv) how does fine-tuning pre-trained models towards downstream NLP tasks affect the learned linguistic knowledge? (v) how do architectures differ in learning different linguistic properties? Our data-driven, quantitative analysis yields interesting findings: (i) we discover small subsets of neurons that can predict different linguistic tasks; (ii) neurons capturing basic lexical information, such as suffixes, are located in the lower-most layers, while those learning complex concepts, such as syntactic roles, appear predominantly in the middle and higher layers; (iii) during transfer learning, salient linguistic neurons relocate from higher to lower layers, as the network preserves the higher layers for task-specific information; (iv) we find interesting differences between pre-trained models in how linguistic information is preserved; and (v) we find that concepts exhibit similar neuron distributions across different languages in multilingual transformer models. Our code is publicly available as part of the NeuroX toolkit.
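A heavily simplified proxy for finding property-specific neurons (not the paper's linguistic correlation analysis, which trains regularised linear probes) is to rank neurons by the absolute correlation of their activations with a binary property; the sketch below assumes one activation vector per token:

```python
def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for constant inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def rank_neurons(activations, labels):
    """activations: one vector per token; labels: 1 if the token has the
    property (e.g. carries a given suffix), else 0.  Returns neuron indices
    sorted from most to least property-correlated."""
    d = len(activations[0])
    score = [abs(pearson([a[i] for a in activations], labels)) for i in range(d)]
    return sorted(range(d), key=lambda i: -score[i])
```

Taking the top of such a ranking is one crude way to identify the "small subsets of neurons" that predict a linguistic property.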
We propose a novel concept-analysis framework to analyze how latent concepts learned in pre-trained language models are encoded. It uses clustering to discover the encoded concepts and explains them by aligning them with a large set of human-defined concepts. Our analysis of seven transformer language models reveals interesting insights: (i) the latent space within the learned representations overlaps with different linguistic concepts to varying degrees; (ii) the lower layers of a model are dominated by lexical concepts (e.g., affixation), whereas core linguistic concepts (e.g., morphological or syntactic relations) are better represented in the middle and higher layers; (iii) some encoded concepts are multi-faceted and cannot be adequately explained by the existing human-defined concepts.
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.
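DEC's clustering objective can be made concrete. It computes soft assignments with a Student's t kernel and sharpens them into an auxiliary target distribution, then minimises the KL divergence between the two. The sketch below implements just these two distributions from the paper's formulation; the encoder and its gradient updates are omitted:

```python
def soft_assign(Z, centers):
    """DEC soft assignment q_ij: Student's t kernel (alpha=1) between
    embedded points Z and cluster centres, normalised per point."""
    Q = []
    for z in Z:
        raw = [1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(z, c))) for c in centers]
        s = sum(raw)
        Q.append([r / s for r in raw])
    return Q

def target_distribution(Q):
    """Auxiliary target p_ij proportional to q_ij^2 / f_j, where
    f_j = sum_i q_ij is the soft cluster frequency; sharpens assignments."""
    f = [sum(row[j] for row in Q) for j in range(len(Q[0]))]
    P = []
    for row in Q:
        raw = [q * q / fj for q, fj in zip(row, f)]
        s = sum(raw)
        P.append([r / s for r in raw])
    return P
```

Training alternates between recomputing P and taking gradient steps on KL(P || Q) through both the cluster centres and the encoder.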
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has been exhaustive work on (i) building larger MLLMs covering a large number of languages, (ii) creating benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learned by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, a pixel-based language encoder that suffers from neither of these problems. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain an 86M-parameter PIXEL model on the same English data as BERT and evaluate it on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks for scripts that are not found in the pretraining data, but is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust to noisy text inputs than BERT, further confirming the benefits of modelling language with pixels.
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
Virtual adversarial training (VAT) is effective at learning robust models under supervised and semi-supervised settings for both computer vision and NLP tasks. However, the efficacy of VAT for multilingual and multilabel text classification has not yet been explored. In this work, we explore VAT for multilabel emotion recognition, with a focus on leveraging unlabelled data from different languages to improve model performance. We perform extensive semi-supervised experiments on the SemEval2018 multilabel and multilingual emotion recognition dataset and show performance gains of 6.2% (Arabic), 3.8% (Spanish) and 1.8% (English) over supervised learning with the same amount of labelled data (10% of the training data). We also improve on the existing state of the art by 7%, 4.5% and 1% (Jaccard index) for Spanish, Arabic and English respectively, and perform probing experiments to understand the impact of the different layers of the contextual models.
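For reference, the standard VAT objective (in the general formulation of virtual adversarial training, not any dataset-specific variant) perturbs each input, labelled or not, in the direction that most changes the model's output distribution:

```latex
r_{\mathrm{vadv}} = \arg\max_{\lVert r \rVert_2 \le \epsilon}
  \mathrm{KL}\!\left[\, p(\cdot \mid x;\, \hat{\theta}) \;\Vert\; p(\cdot \mid x + r;\, \hat{\theta}) \,\right],
\qquad
\mathcal{L}_{\mathrm{vat}}(x;\, \theta) =
  \mathrm{KL}\!\left[\, p(\cdot \mid x;\, \hat{\theta}) \;\Vert\; p(\cdot \mid x + r_{\mathrm{vadv}};\, \theta) \,\right]
```

Here \(\hat{\theta}\) denotes the current parameters treated as constants. Because the regulariser requires no gold label, it applies directly to unlabelled data, which is what makes it attractive for the semi-supervised multilingual setting described above.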
Language models are commonly believed to encode syntax [Tenney et al., 2019; Jawahar et al., 2019; Hewitt and Manning, 2019]. In this paper, we propose UPOA, an unsupervised constituency parsing model that computes outside association scores, based solely on the self-attention weight matrices learned in a pretrained language model, as syntactic distances for span segmentation. We further propose an enhanced version, UPIO, which exploits both inside and outside association scores to estimate the likelihood of a span. Experiments with UPOA and UPIO reveal that the linear projection matrices for the query and key in the self-attention mechanism play an important role in parsing. We therefore extend the unsupervised models to few-shot models (FPOA, FPIO) that use a few annotated trees to learn better linear projection matrices for parsing. Experiments on the Penn Treebank show that our unsupervised parsing model UPIO achieves results comparable to the state of the art on short sentences (length <= 10). Our few-shot parsing model FPIO, trained with only 20 annotated trees, outperforms a previous few-shot parsing method trained with 50 annotated trees. Experiments on cross-lingual parsing show that both the unsupervised and the few-shot parsing methods perform better than previous methods on most languages of SPMRL [Seddah et al., 2013].
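Using scores as "syntactic distances for span segmentation" decodes a tree greedily: split each span at its largest gap. The sketch below shows only this standard decoding step, with made-up distances rather than the attention-derived association scores the paper computes:

```python
def build_tree(words, dists):
    """Greedy top-down split: break the span at the largest syntactic distance.
    dists[i] is the distance between words[i] and words[i + 1]."""
    if len(words) == 1:
        return words[0]
    i = max(range(len(dists)), key=lambda i: dists[i])
    return (build_tree(words[:i + 1], dists[:i]),
            build_tree(words[i + 1:], dists[i + 1:]))
```

A large distance between two adjacent words signals a constituent boundary, so the highest-distance gap becomes the top-level split and the procedure recurses into both halves.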
We propose a transition-based approach that, by training a single model, can efficiently parse any input sentence with both constituent and dependency trees, supporting both continuous/projective and discontinuous/non-projective syntactic structures. To that end, we develop a Pointer Network architecture with two separate task-specific decoders and a common encoder, and follow a multitask learning strategy to jointly train them. The resulting quadratic system not only becomes the first parser that can jointly produce both unrestricted constituent and dependency trees from a single model, but also proves that both syntactic formalisms can benefit from each other during training, achieving state-of-the-art accuracies in several widely-used benchmarks such as the continuous English and Chinese Penn Treebanks, as well as the discontinuous German NEGRA and TIGER datasets.
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence tagging. Despite their state-of-the-art performance on high-resource languages, they still lag behind their statistical counterparts in low-resource scenarios. One methodology for countering this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a wealth of text augmentation techniques, the field still lacks a systematic performance analysis across a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methods that perform changes at the syntax level (e.g., cropping sub-sentences), the token level (e.g., random word insertion) and the character level (e.g., character swapping). We systematically compare them on part-of-speech tagging, dependency parsing and semantic role labelling across a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labelling. We find the tested techniques to be effective on morphologically rich languages in general, rather than on analytic languages such as Vietnamese. Our results suggest that augmentation techniques can yield further improvements over strong baselines based on mBERT. We identify character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss how the results depend most heavily on the task, the language pair and the model type.
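The three augmentation levels named above can be illustrated with minimal operations; these are illustrative toy versions with names of our choosing, and the surveyed syntax-level methods crop along real parse trees rather than arbitrary spans:

```python
import random

def char_swap(word, rng):
    """Character-level: swap two adjacent inner characters of a word."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def random_insertion(tokens, vocab, rng):
    """Token-level: insert a random vocabulary token at a random position."""
    i = rng.randrange(len(tokens) + 1)
    return tokens[:i] + [rng.choice(vocab)] + tokens[i:]

def crop_subsentence(tokens, rng):
    """Syntax-level (crudely approximated): keep a contiguous sub-span."""
    i = rng.randrange(len(tokens))
    j = rng.randrange(i + 1, len(tokens) + 1)
    return tokens[i:j]
```

For sequence tagging, each operation must also adjust the label sequence (e.g., duplicating or dropping tags alongside tokens), which is where the task-dependence reported above comes from.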
Pretrained multilingual models have become a common tool for transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions about how best to adapt multilingual models to low-resource settings.
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.