We propose to take on the problem of word sense disambiguation (WSD). In language, words of the same form can take on different meanings depending on their context. While humans can easily infer the meaning, or gloss, of such words from their context, machines struggle with this task. We intend to replicate and extend the results of Huang et al., who designed a model for disambiguating these words (Huang et al., 2019). Specifically, we propose the following enhancements: dataset tweaks (the alpha hyper-parameter), ensemble methods, and replacing BERT with BART and ALBERT. The accompanying GitHub repository contains all the code used in this report; it extends the code provided by Huang et al.
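To make the planned BERT-to-ALBERT swap concrete, the sketch below scores candidate glosses against a context sentence with an ALBERT cross-encoder loaded through Hugging Face Transformers, in the spirit of the gloss-pair setup of Huang et al. (2019). The checkpoint name, the example glosses for "bank", and the untrained classification head are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch: scoring candidate glosses against a context sentence with an
# ALBERT cross-encoder (the BERT -> ALBERT swap discussed above).
# The classification head below is freshly initialized; in practice it would first
# be fine-tuned on (context, gloss) pairs before the scores are meaningful.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

context = "He sat on the bank and watched the river flow."
candidate_glosses = [  # hypothetical glosses for the target word "bank"
    "bank: sloping land beside a body of water",
    "bank: a financial institution that accepts deposits",
]

scores = []
for gloss in candidate_glosses:
    inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    scores.append(logits.softmax(dim=-1)[0, 1].item())  # P(gloss matches context)

print(candidate_glosses[max(range(len(scores)), key=scores.__getitem__)])
```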
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT-base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show, both theoretically and empirically, that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available. (We randomly sample 10^6 sentences from English Wikipedia and fine-tune BERT-base with learning rate 3e-5 and N = 64; in all our experiments, no STS training sets are used.)
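The dropout-as-augmentation objective described above lends itself to a compact sketch: encode the same batch twice with dropout active and apply an in-batch contrastive (InfoNCE) loss so that the two dropout views of each sentence attract each other. The toy MLP encoder and random features below merely stand in for BERT-base so the snippet stays self-contained.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in encoder: any network with dropout illustrates the objective;
# SimCSE itself uses BERT-base with its standard dropout (p = 0.1).
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Dropout(p=0.1), torch.nn.Linear(64, 64)
)
encoder.train()  # keep dropout active: it is the only "augmentation"

x = torch.randn(8, 32)                 # a batch of 8 (here random) sentence features
z1 = F.normalize(encoder(x), dim=-1)   # first pass  -> first dropout view
z2 = F.normalize(encoder(x), dim=-1)   # second pass -> second dropout view

temperature = 0.05
sim = z1 @ z2.t() / temperature        # cosine similarities between all pairs in the batch
labels = torch.arange(x.size(0))       # the positive of sentence i is its own second view
loss = F.cross_entropy(sim, labels)    # in-batch contrastive (InfoNCE) loss
loss.backward()
print(float(loss))
```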
The definition generation task aims to automatically generate the definition of a word in a specific context. However, owing to the lack of datasets covering different complexity levels, the definitions produced by models tend to remain at the same complexity level. This paper proposes the new task of generating definitions for words at controllable complexity levels. Accordingly, we introduce COMPILING, a dataset that provides detailed information about Chinese definitions, with each definition labeled with its complexity level. The COMPILING dataset includes 74,303 words and 106,882 definitions. To the best of our knowledge, it is the largest dataset for the Chinese definition generation task. We select various representative generation methods as baselines for this task and conduct evaluations, which illustrates that our dataset plays an outstanding role in assisting models in generating definitions at different complexity levels. We believe the COMPILING dataset will benefit further research on complexity-controllable definition generation.
Idiomatic expressions (IEs) play an important role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence containing an IE by replacing the IE with its literal interpretation. The lack of a large corpus of sentences parallel between idiomatic and literal forms is the main challenge of this task, and we consider two separate solutions. First, we propose an unsupervised approach to ISP that leverages the contextual information and definition of the IE and does not require a parallel-sentence training set. Second, we propose a weakly supervised approach that uses back-translation to jointly perform paraphrasing and generation of sentences with IEs, in order to enlarge the small-scale parallel-sentence training dataset. Other significant byproducts of this study include a model that replaces a literal phrase in a sentence with an IE to generate idiomatic expressions, and a large-scale parallel dataset of idiomatic/literal sentence pairs. The effectiveness of the proposed solutions compared with competitive baselines is observed in relative gains of over 5.16 points in BLEU, over 8.75 points, and over 19.57 points in SARI when the generated sentences are empirically validated on the parallel dataset using both automatic and manual evaluations. We also demonstrate the practical utility of ISP as a preprocessing step in En-De machine translation.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
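As a rough illustration of the "one additional output layer" recipe, the sketch below wraps a pre-trained BERT checkpoint with a fresh two-way classification head and runs a single fine-tuning step on a toy sentence-pair batch; the labels, hyper-parameters, and data are placeholders, not the paper's configuration.

```python
# Sketch of fine-tuning BERT with a single added classification head
# (sentence-pair input, in the style of NLI/question-answering tasks).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

premises = ["A man is playing a guitar.", "The cat sleeps on the mat."]
hypotheses = ["A person makes music.", "A dog is barking."]
labels = torch.tensor([1, 0])  # toy labels: 1 = entailed, 0 = not entailed

batch = tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy over the new output layer
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```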
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large scale expensive human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training. However, despite its substantial success on single sentence classification tasks where the challenge in making use of unlabeled data is to assign "good enough" pseudo-labels, for NLI tasks, the nature of unlabeled data is more complex: one of the sentences in the pair (usually the hypothesis) along with the class label are missing from the data and require human annotations, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI where we use a conditional language model, BART to generate the hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves the performance of four NLI datasets in low-resource settings. We release our code at: https://github.com/msadat3/SSL_for_NLI.
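A minimal sketch of the central idea follows: fine-tune BART on (premise, hypothesis) pairs from the labeled set, then let it generate hypotheses for unlabeled premises, which can be pseudo-labeled and added to training. The checkpoint, toy data, and decoding settings are illustrative assumptions, not the released SSL pipeline.

```python
# Sketch: using BART as a conditional hypothesis generator for SSL-style NLI.
# Step 1 would fine-tune on (premise -> hypothesis) pairs from the labeled set;
# step 2 generates hypotheses for unlabeled premises to build pseudo-labeled pairs.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Step 1: one toy supervised update on a labeled (premise, hypothesis) pair.
enc = tokenizer(["A man is playing a guitar."], return_tensors="pt")
dec = tokenizer(["A person is making music."], return_tensors="pt")
loss = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask,
             labels=dec.input_ids).loss
loss.backward()  # a real run would loop over the labeled set with an optimizer

# Step 2: generate a hypothesis for an unlabeled premise.
model.eval()
unlabeled = tokenizer(["The children are waiting for the bus."], return_tensors="pt")
with torch.no_grad():
    out = model.generate(**unlabeled, num_beams=4, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```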
Post-editing in automatic speech recognition (ASR) entails automatically correcting common and systematic errors produced by ASR systems. The output of an ASR system is largely prone to phonetic and spelling errors. In this paper, we propose to use a powerful pre-trained sequence-to-sequence model, BART, further adaptively trained to serve as a denoising model, to correct errors of these types. The adaptive training is performed on an augmented dataset obtained by synthetically inducing errors as well as by incorporating actual errors from an existing ASR system. We also propose a simple approach to recover the output using word-level alignment. Experimental results on accented speech data show that our strategy effectively corrects a large number of ASR errors and produces improved results when compared with competitive baselines. We also highlight a negative result obtained on the related task of grammatical error correction in the Hindi language, showing the limitations of capturing a wider context with our proposed model.
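One ingredient that a short sketch can clarify is the synthetic side of the adaptive training: corrupting clean transcripts to obtain (noisy, clean) pairs on which a denoising seq2seq model such as BART could be fine-tuned. The character-level corruption rules below are purely illustrative stand-ins for ASR-style phonetic and spelling errors, not the paper's error model.

```python
import random

random.seed(13)

def inject_asr_like_errors(text: str, p: float = 0.1) -> str:
    """Crude character-level corruption to mimic phonetic/spelling ASR errors."""
    out = []
    for ch in text:
        r = random.random()
        if r < p / 3:
            continue                              # deletion
        elif r < 2 * p / 3:
            out.append(random.choice("aeiou"))    # substitution
        elif r < p:
            out.append(ch + ch)                   # duplication
        else:
            out.append(ch)
    return "".join(out)

clean = "the quick brown fox jumps over the lazy dog"
pairs = [(inject_asr_like_errors(clean), clean) for _ in range(3)]
for noisy, target in pairs:
    print(noisy, "->", target)   # (noisy, clean) pairs for denoising fine-tuning
```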
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
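The text-to-text recipe can be sketched in a few lines: every task becomes "text in, text out", so classification labels are simply target strings. The example below uses the public t5-small checkpoint and an "sst2 sentence:" prefix in the spirit of the paper's task prefixes; how well that small checkpoint actually answers the prompt is not guaranteed here.

```python
# Sketch: casting a classification task into T5's text-to-text format.
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("sst2 sentence: this movie was a complete waste of time",
                   return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_length=5)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # ideally "negative"

# Fine-tuning uses the same interface: the label is simply target text.
labels = tokenizer("negative", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
print(float(loss))
```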
Contrastive learning has been attracting much attention for learning unsupervised sentence embeddings. The current state-of-the-art unsupervised method is unsupervised SimCSE (unsup-SimCSE). Unsup-SimCSE takes dropout as a minimal data augmentation method and passes the same input sentence to a pre-trained Transformer encoder (with dropout turned on) twice to obtain the two corresponding embeddings that build a positive pair. Because the length information of a sentence is generally encoded into its embedding through the position embeddings used in Transformers, each positive pair in unsup-SimCSE actually contains the same length information. Unsup-SimCSE trained with these positive pairs is therefore probably biased, tending to consider sentences of the same or similar length as semantically more similar. Through statistical observations, we find that unsup-SimCSE does indeed have such a problem. To alleviate it, we apply a simple repetition operation to modify the input sentence, and then pass the input sentence and its modified counterpart to the pre-trained Transformer encoder, respectively, to obtain the positive pair. In addition, we draw inspiration from the computer vision community and introduce momentum contrast, enlarging the number of negative pairs without additional computation. The two proposed modifications are applied to positive and negative pairs respectively, and together build a new sentence embedding method, termed enhanced unsup-SimCSE (ESimCSE). We evaluate the proposed ESimCSE on several benchmark datasets w.r.t. the semantic textual similarity (STS) task. Experimental results show that ESimCSE outperforms the state-of-the-art unsup-SimCSE by an average of 2.02% Spearman correlation on BERT-base.
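The word-repetition operation is simple enough to sketch directly: duplicate a random subset of tokens so the two encoder passes of a positive pair no longer share the same length. Whitespace tokens and the duplication rate below are simplifications of the sub-word-level operation used in the paper.

```python
import random

random.seed(42)

def word_repetition(sentence: str, dup_rate: float = 0.32) -> str:
    """Duplicate a random subset of tokens so the positive pair differs in length."""
    tokens = sentence.split()                       # sub-word tokens in the real method
    n_dup = max(1, int(len(tokens) * dup_rate * random.random()))
    dup_positions = set(random.sample(range(len(tokens)), n_dup))
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)
        if i in dup_positions:
            out.append(tok)                         # repeat this token once
    return " ".join(out)

s = "contrastive learning builds sentence embeddings without labels"
print(s)
print(word_repetition(s))   # some words now appear twice, so the length differs
```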
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pre-training to a large number of languages. Given their success in zero-shot transfer learning, there has emerged a large body of work on (i) building bigger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
This paper presents research on spoiler detection. For this use case, we describe methods of fine-tuning and organizing text-based model tasks with the latest deep learning achievements, together with techniques for interpreting the models' results. To date, spoiler research has rarely been described in the literature. We tested transfer learning approaches and different state-of-the-art Transformer architectures on a dataset with annotated spoilers (ROC AUC above 81% on the TV Tropes dataset). We also collected data and assembled a new dataset with fine-grained annotations. To that end, we employed interpretability techniques and measures to assess the models' reliability and explain their results.
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF (better few-shot fine-tuning of language models; alternatively, language models' best friends forever), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low-resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.
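A minimal sketch of the prompt-based half of the recipe: wrap the input in a template containing a mask slot and compare the masked-LM scores of label words. The template and the label words "great"/"terrible" hard-coded below are exactly the kind of choices LM-BFF automates; here they are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

sentence = "No reason to watch it."
template = f"{sentence} It was {tokenizer.mask_token}."        # prompt with a mask slot
label_words = {"positive": " great", "negative": " terrible"}  # leading space for RoBERTa BPE

inputs = tokenizer(template, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Classification = which label word the masked-LM prefers at the mask position.
scores = {lab: logits[tokenizer.encode(word, add_special_tokens=False)[0]].item()
          for lab, word in label_words.items()}
print(max(scores, key=scores.get), scores)
```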
Idiomatic expressions can be problematic for natural language processing applications because their meaning cannot be inferred from their constituent words. The lack of successful methodological approaches and sufficiently large datasets has prevented the development of machine learning approaches for detecting idioms, especially for expressions that do not occur in the training set. We present an approach, called MICE, that uses contextual embeddings for this purpose. We present a new dataset of multi-word expressions with literal and idiomatic meanings and use it to train classifiers based on two state-of-the-art contextual word embeddings: ELMo and BERT. We show that deep neural networks using both embeddings perform better than existing approaches and are capable of detecting idiomatic word use, even for expressions that were not present in the training set. We demonstrate cross-lingual transfer of the developed models and analyze the size of the required dataset.
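To illustrate how contextual embeddings feed such a classifier, the sketch below mean-pools the BERT word-piece vectors of a candidate expression into a single feature vector (the paper also uses ELMo); the example sentence, the span-matching logic, and the downstream idiom/literal classifier are assumptions for illustration only.

```python
# Sketch: a contextual-embedding feature for idiom vs. literal classification.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "After the merger he finally kicked the bucket on his old job."
expression = "kicked the bucket"

enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]        # (seq_len, 768)

# Locate the expression's word-piece positions and mean-pool them into one vector.
expr_ids = tokenizer(expression, add_special_tokens=False).input_ids
seq = enc.input_ids[0].tolist()
start = next(i for i in range(len(seq)) if seq[i:i + len(expr_ids)] == expr_ids)
feature = hidden[start:start + len(expr_ids)].mean(dim=0)  # input to an idiom/literal classifier
print(feature.shape)
```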
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
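The text-infilling corruption is easy to sketch: replace a contiguous span of tokens with a single mask token and train the model to reconstruct the original. The uniform span length below simplifies the Poisson-distributed span sampling described in the paper, and only a single span is masked.

```python
import random

random.seed(7)

def text_infilling(tokens, mask_token="<mask>", max_span=3):
    """Replace one contiguous span of tokens with a single mask token.
    BART samples span lengths from a Poisson distribution; a uniform choice is used here."""
    span_len = random.randint(0, max_span)          # zero-length spans just insert a mask
    start = random.randint(0, len(tokens) - span_len)
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

original = "the quick brown fox jumps over the lazy dog".split()
corrupted = text_infilling(original)
print(" ".join(corrupted), "->", " ".join(original))  # (noisy input, reconstruction target)
```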
Current state-of-the-art approaches to text classification typically leverage BERT-style Transformer models with a softmax classifier, jointly fine-tuned to predict class labels of a target task. In this paper, we instead propose an alternative training objective in which we learn task-specific embeddings of text: our proposed objective learns embeddings such that all texts that share the same target class label should be close together in the embedding space, while all others should be far apart. This allows us to replace the softmax classifier with a more interpretable k-nearest-neighbor classification approach. In a series of experiments, we show that this yields a number of interesting benefits: (1) The resulting order induced by distances in the embedding space can be used to directly explain classification decisions. (2) This facilitates qualitative inspection of the training data, helping us to better understand the problem space and identify labelling quality issues. (3) The learned distances to some degree generalize to unseen classes, allowing us to incrementally add new classes without retraining the model. We present extensive experiments which show that the benefits of ante-hoc explainability and incremental learning come at no cost in overall classification accuracy, thus pointing to practical applicability of our proposed approach.
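A minimal sketch of the inference side of this approach, assuming the task-specific embeddings have already been learned: classify a query by a k-nearest-neighbor vote in embedding space, with the retrieved neighbors doubling as the explanation of the decision. The random "embeddings" and the tiny labeled set below are placeholders for a trained encoder and real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder task-specific embeddings: in the approach above these come from an
# encoder trained so that same-label texts are close and different-label texts far apart.
train_texts  = ["refund please", "card was charged twice", "love the app", "works great"]
train_labels = ["complaint", "complaint", "praise", "praise"]
train_emb = np.vstack([rng.normal(loc=(0 if y == "complaint" else 3), size=8)
                       for y in train_labels])

def knn_predict(query_emb, k=3):
    dists = np.linalg.norm(train_emb - query_emb, axis=1)
    idx = np.argsort(dists)[:k]                      # nearest labeled examples
    votes = [train_labels[i] for i in idx]
    label = max(set(votes), key=votes.count)
    # the retrieved neighbors themselves serve as the explanation of the decision
    return label, [(train_texts[i], float(dists[i])) for i in idx]

query = rng.normal(loc=0, size=8)                    # embedding of a new "complaint-like" text
print(knn_predict(query))
```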
Long documents such as academic articles and business reports have long been the standard format for detailing important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts encapsulating the most important information is therefore significant in aiding readers' comprehension. Recently, with the advent of neural architectures, significant research effort has been made to advance automatic text summarization systems, along with a large amount of work on the challenges of extending these systems to the long document domain. In this survey, we provide a comprehensive overview of the research on long document summarization, together with a systematic evaluation across the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study of the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
Data augmentation, the artificial creation of training data for machine learning by means of transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming limited training data, to regularizing the objective, to limiting the amount of data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy of existing works, this survey is concerned with data augmentation methods for text classification and aims to provide a concise and comprehensive overview for researchers and practitioners. We divide more than 100 methods into 12 different groupings and provide state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a basis for future work are provided.
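As one concrete instance of the simplest family of methods such a survey covers (token-level perturbation in the spirit of "easy data augmentation"), the sketch below applies random swap and random deletion to a labeled example to yield extra training instances; the specific operations and rates are illustrative.

```python
import random

random.seed(3)

def random_swap(tokens, n=1):
    """Swap n random pairs of token positions."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

text, label = "the service was slow but the food was excellent", "positive"
tokens = text.split()
augmented = [(" ".join(random_swap(tokens)), label),
             (" ".join(random_deletion(tokens)), label)]
for t, y in augmented:
    print(y, "|", t)   # augmented instances keep the original label
```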
In recent years, Transformer-based pre-trained models have made great progress and become one of the most important backbones in natural language processing. Recent work has shown that the attention mechanism inside the Transformer may not be necessary, and both convolutional neural networks and multi-layer-perceptron-based models have been investigated as Transformer alternatives. In this paper, we consider a graph recurrent network for language model pre-training, which builds a graph structure for each sequence with local token-level communication, together with a sentence-level representation decoupled from the other tokens. The original model performs well on domain-specific text classification under supervised training; however, its potential for learning transferable knowledge through self-supervision has not been fully exploited. We fill this gap by optimizing the architecture and verifying its effectiveness on more general language understanding tasks, in both English and Chinese. As for model efficiency, instead of the quadratic complexity of Transformer-based models, our model has linear complexity and performs more efficiently during inference. Moreover, we find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short in intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language in its core, can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
Definitions are essential for term understanding. Recently, there has been increasing interest in automatically extracting and generating definitions of terms. However, existing approaches to this task are either extractive or abstractive: definitions are either extracted from a corpus or generated by a language generation model. In this paper, we propose to combine extraction and generation for definition modeling: first extract self- and correlated definitional information of the target term from the Web, and then generate the final definition by incorporating the extracted definitional information. Experiments show that our framework can generate high-quality definitions for technical terms and outperforms state-of-the-art models for definition modeling.