General text representation is the foundation of many natural language understanding tasks. To fully exploit different corpora, it is inevitable to understand the correlations among them. However, many methods ignore these correlations and directly apply a single-channel model to all tasks (a coarse paradigm), which lacks sufficient rationale and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine paradigm), which can introduce redundancy and noise. In this work, we first analyze task correlations from three different perspectives, namely data-property, manually designed, and model-based correlations, based on which similar tasks are grouped together. We then propose a hierarchical framework with a coarse-to-fine paradigm, whose bottom level is shared by all tasks, middle level is divided into different groups, and top level is assigned to each individual task. This allows our model to learn basic language properties from all tasks, boost the performance of related tasks, and reduce the negative impact of unrelated tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
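A minimal PyTorch sketch of the coarse-to-fine sharing idea described above. The module depths, layer sizes, group names, and task-to-group assignment are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoarseToFineModel(nn.Module):
    """Three-level sharing: a bottom encoder shared by all tasks, a middle
    block per task group, and a top head per task (illustrative sketch)."""

    def __init__(self, hidden=768, groups=("classification", "matching"),
                 task_to_group=None, task_to_labels=None):
        super().__init__()
        # Bottom level: shared by every task (stand-in for a pre-trained encoder).
        self.shared = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True),
            num_layers=6,
        )
        # Middle level: one block per group of related tasks.
        self.group_blocks = nn.ModuleDict({
            g: nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True),
                num_layers=2,
            )
            for g in groups
        })
        # Top level: one output head per task.
        self.task_to_group = task_to_group or {}
        self.heads = nn.ModuleDict({
            t: nn.Linear(hidden, n_labels)
            for t, n_labels in (task_to_labels or {}).items()
        })

    def forward(self, embeddings, task):
        h = self.shared(embeddings)                          # shared across all tasks
        h = self.group_blocks[self.task_to_group[task]](h)   # group-specific
        return self.heads[task](h[:, 0])                     # task-specific head on [CLS]


# Hypothetical usage with two groups and three tasks.
model = CoarseToFineModel(
    task_to_group={"sst2": "classification", "mnli": "matching", "qqp": "matching"},
    task_to_labels={"sst2": 2, "mnli": 3, "qqp": 2},
)
logits = model(torch.randn(4, 32, 768), task="mnli")
print(logits.shape)  # torch.Size([4, 3])
```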
In recent years, great progress has been made in applying pre-trained language models (e.g., BERT) to information retrieval (IR) tasks. Hyperlinks, which are commonly used in web pages, have been leveraged to design pre-training objectives. For example, the anchor texts of hyperlinks have been used to simulate queries, thus constructing a huge number of query-document pairs for pre-training. However, as a bridge connecting two web pages, the potential of hyperlinks has not been fully explored. In this work, we focus on modeling the relationship between two documents connected by a hyperlink and design a new pre-training objective for ad-hoc retrieval. Specifically, we categorize the relationships between documents into four groups: no link, unidirectional link, symmetric link, and the most relevant symmetric link. By comparing two documents sampled from adjacent groups, the model can gradually improve its ability to capture matching signals. We propose a Progressive Hyperlink Prediction (PHP) framework to explore the utilization of hyperlinks in pre-training. Experimental results on two large-scale ad-hoc retrieval datasets and six question answering datasets demonstrate its superiority over existing pre-training methods.
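A hedged sketch of the progressive pairwise objective as described above: documents are bucketed by link relation, and the model learns to score a document from the stronger group above one sampled from the adjacent weaker group. The scoring encoder, margin, and loss form are assumptions, not the PHP implementation.

```python
import torch
import torch.nn as nn

# Relation groups ordered from weakest to strongest, per the grouping above.
GROUPS = ["no_link", "unidirectional", "symmetric", "most_relevant_symmetric"]

class PairScorer(nn.Module):
    """Scores how related a candidate document is to an anchor document."""
    def __init__(self, hidden=768):
        super().__init__()
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for a PLM
        self.score = nn.Bilinear(hidden, hidden, 1)

    def forward(self, anchor_emb, cand_emb):
        _, a = self.encoder(anchor_emb)
        _, c = self.encoder(cand_emb)
        return self.score(a.squeeze(0), c.squeeze(0)).squeeze(-1)

def progressive_pair_loss(scorer, anchor, pos_doc, neg_doc, margin=1.0):
    """pos_doc is sampled from group i+1, neg_doc from the adjacent group i."""
    s_pos = scorer(anchor, pos_doc)
    s_neg = scorer(anchor, neg_doc)
    target = torch.ones_like(s_pos)
    return nn.functional.margin_ranking_loss(s_pos, s_neg, target, margin=margin)

scorer = PairScorer()
anchor, pos, neg = (torch.randn(2, 16, 768) for _ in range(3))
loss = progressive_pair_loss(scorer, anchor, pos, neg)
loss.backward()
```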
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
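One ingredient commonly examined in this line of fine-tuning work is a layer-wise (discriminative) learning rate, where lower BERT layers are updated more gently than upper ones. A minimal sketch with the Hugging Face transformers API; the decay factor, base rate, and toy batch are illustrative values, not the paper's recipe.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Layer-wise learning rate decay: lower layers get smaller learning rates.
base_lr, decay = 2e-5, 0.95
param_groups = [{"params": model.bert.embeddings.parameters(),
                 "lr": base_lr * decay ** 12}]          # embeddings decay the most
for i, layer in enumerate(model.bert.encoder.layer):    # 12 layers for bert-base
    param_groups.append({"params": layer.parameters(),
                         "lr": base_lr * decay ** (12 - i)})
param_groups.append({"params": list(model.bert.pooler.parameters())
                               + list(model.classifier.parameters()),
                     "lr": base_lr})

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)

# One toy training step on a single example.
batch = tokenizer(["a well written movie"], return_tensors="pt")
labels = torch.tensor([1])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```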
Pre-trained language models have achieved state-of-the-art results on various natural language processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed to pre-train large-scale knowledge-enhanced models, and a version with 10 billion parameters was trained. ERNIE 3.0 outperformed state-of-the-art models on various NLP tasks. To explore the effect of scaling up, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. In addition, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable text. To reduce computation overhead and carbon emission, we propose an online distillation framework for ERNIE 3.0 Titan, where the teacher model teaches the students and trains itself simultaneously. ERNIE 3.0 Titan is the largest Chinese dense pre-trained model to date. Empirical results show that ERNIE 3.0 Titan outperforms state-of-the-art models on 68 NLP datasets.
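A toy sketch of the online-distillation idea: the student's distillation loss is computed on the fly against the teacher's current predictions while the teacher keeps training on its own objective, avoiding a separate teaching pass after training. The tiny models, loss weights, and temperature below are assumptions, not the ERNIE 3.0 Titan setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "teacher" and "student" next-token predictors over a small vocabulary.
vocab, ctx = 1000, 8
teacher = nn.Sequential(nn.Embedding(vocab, 128), nn.Flatten(), nn.Linear(128 * ctx, vocab))
student = nn.Sequential(nn.Embedding(vocab, 32), nn.Flatten(), nn.Linear(32 * ctx, vocab))

opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
temperature = 2.0

def online_distillation_step(tokens, next_token):
    """One step: teacher trains on its task loss; student distills from the
    teacher's current softened distribution plus its own task loss."""
    t_logits = teacher(tokens)
    s_logits = student(tokens)

    t_loss = F.cross_entropy(t_logits, next_token)

    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    s_loss = F.cross_entropy(s_logits, next_token) + kd

    opt_t.zero_grad(); t_loss.backward(); opt_t.step()
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return t_loss.item(), s_loss.item()

tokens = torch.randint(0, vocab, (4, ctx))      # batch of 4 contexts
next_token = torch.randint(0, vocab, (4,))
print(online_distillation_step(tokens, next_token))
```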
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
Text matching is a fundamental technique in both information retrieval and natural language processing. Text matching tasks share the same paradigm of determining the relationship between two given texts. These relationships vary with the task, e.g., relevance in document retrieval, semantic alignment in paraphrase identification, and answerability judgment in question answering. However, the essential signals for text matching remain in a finite scope, i.e., exact match, semantic match, and inference match. Ideally, a good text matching model should learn to capture and aggregate these signals to achieve competitive performance across different matching tasks, while recent state-of-the-art text matching models, e.g., pre-trained language models (PLMs), struggle to generalize. This is because end-to-end supervised learning on task-specific datasets makes the model overemphasize data sample bias and task-specific signals instead of the essential matching signals. To overcome this problem, we adopt a specialization-generalization training strategy and refer to it as Match-Prompt. In the specialization stage, descriptions of different matching tasks are mapped to a few prompt tokens. In the generalization stage, the matching model explores the essential matching signals by being trained on diverse matching tasks. The high diversity of matching tasks prevents the model from fitting the data bias of a specific task, so the model can focus on learning the essential matching signals. Meanwhile, the prompt tokens obtained in the first step help the model distinguish different task-specific matching signals. Experimental results on public datasets show that Match-Prompt can improve the multi-task generalization capability of PLMs in text matching, and yields better in-domain multi-task, out-of-domain multi-task, and new-task adaptation performance than task-specific models trained by the previous fine-tuning paradigm.
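A hedged sketch of the prompt-token idea: a few learnable task-specific embeddings are prepended to each text pair before a shared PLM scores it, so the backbone can learn shared matching signals while the prompts carry the task-specific ones. The prompt length, task names, scoring head, and pooling position are assumptions, not the Match-Prompt implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class PromptedMatcher(nn.Module):
    """Shared PLM with a small learnable prompt per matching task."""

    def __init__(self, tasks, n_prompt=4, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        dim = self.bert.config.hidden_size
        self.prompts = nn.ParameterDict({
            t: nn.Parameter(torch.randn(n_prompt, dim) * 0.02) for t in tasks
        })
        self.score = nn.Linear(dim, 1)

    def forward(self, input_ids, attention_mask, task):
        # Word embeddings only; BERT adds positions/LayerNorm to inputs_embeds.
        tok_emb = self.bert.get_input_embeddings()(input_ids)
        prompt = self.prompts[task].unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(input_ids.size(0), prompt.size(1),
                                 dtype=attention_mask.dtype)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.bert(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Pool the first prompt position as the pair representation (a choice).
        return self.score(out.last_hidden_state[:, 0]).squeeze(-1)

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = PromptedMatcher(tasks=["retrieval", "paraphrase", "qa"])
batch = tok("what is bm25?", "bm25 is a ranking function.", return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"], task="retrieval"))
```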
Dense retrieval aims to map queries and passages into low-dimensional vector space for efficient similarity measuring, showing promising effectiveness in various large-scale retrieval tasks. Since most existing methods commonly adopt pre-trained Transformers (e.g. BERT) for parameter initialization, some work focuses on proposing new pre-training tasks for compressing the useful semantic information from passages into dense vectors, achieving remarkable performance. However, it is still challenging to effectively capture the rich semantic information and relations about passages into the dense vectors via one single particular pre-training task. In this work, we propose a multi-task pre-trained model, MASTER, that unifies and integrates multiple pre-training tasks with different learning objectives under the bottlenecked masked autoencoder architecture. Concretely, MASTER utilizes a multi-decoder architecture to integrate three types of pre-training tasks: corrupted passages recovering, related passage recovering and PLMs outputs recovering. By incorporating a shared deep encoder, we construct a representation bottleneck in our architecture, compressing the abundant semantic information across tasks into dense vectors. The first two types of tasks concentrate on capturing the semantic information of passages and relationships among them within the pre-training corpus. The third one can capture the knowledge beyond the corpus from external PLMs (e.g. GPT-2). Extensive experiments on several large-scale passage retrieval datasets have shown that our approach outperforms the previous state-of-the-art dense retrieval methods. Our code and data are publicly released at https://github.com/microsoft/SimXNS
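A schematic PyTorch sketch of the bottlenecked multi-decoder layout: a deep shared encoder compresses a passage into a single dense vector, and several shallow decoders must reconstruct different targets from only that vector. Depths, dimensions, task names, and the omission of a causal mask are simplifying assumptions, not the MASTER implementation.

```python
import torch
import torch.nn as nn

class BottleneckedMultiDecoder(nn.Module):
    def __init__(self, vocab=30522, hidden=768, n_dec_layers=2,
                 tasks=("corrupted_passage", "related_passage", "plm_outputs")):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)  # deep, shared
        self.decoders = nn.ModuleDict()
        for t in tasks:  # shallow decoders, one per pre-training task
            dec_layer = nn.TransformerDecoderLayer(hidden, nhead=12, batch_first=True)
            self.decoders[t] = nn.TransformerDecoder(dec_layer, num_layers=n_dec_layers)
        self.lm_head = nn.Linear(hidden, vocab)

    def forward(self, input_ids, target_ids, task):
        h = self.encoder(self.embed(input_ids))
        bottleneck = h[:, :1]                     # single dense vector ([CLS]-style)
        tgt = self.embed(target_ids)
        # The decoder can only attend to the bottleneck vector as memory.
        dec = self.decoders[task](tgt, memory=bottleneck)
        return self.lm_head(dec)                  # token logits for the recovery loss

model = BottleneckedMultiDecoder()
ids = torch.randint(0, 30522, (2, 64))
logits = model(ids, ids, task="related_passage")
print(logits.shape)  # torch.Size([2, 64, 30522])
```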
Using a single model across various tasks is beneficial for training and applying deep neural sequence models. We address the problem of developing generalist representations of text that can be used to perform a range of different tasks rather than being specialised to a single application. We focus on processing short questions and developing an embedding for these questions that is useful on a diverse set of problems, such as question topic classification, equivalent question recognition, and question answering. This paper introduces QBERT, a generalist model for processing questions. With QBERT, we demonstrate how we can train a multi-task network that performs all question-related tasks and achieves performance comparable to the corresponding single-task models.
This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
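A small sketch of the three self-attention mask patterns that this kind of unified modeling relies on, built as boolean matrices where True means a position may attend to another. The helper name and the source/target split are illustrative; this is not the UNILM codebase.

```python
import torch

def unilm_style_masks(src_len: int, tgt_len: int):
    """Build three self-attention masks controlling what context each
    position may condition on (True = may attend)."""
    n = src_len + tgt_len

    # Bidirectional LM: every token sees every other token.
    bidirectional = torch.ones(n, n, dtype=torch.bool)

    # Unidirectional (left-to-right) LM: token i sees tokens <= i.
    unidirectional = torch.tril(torch.ones(n, n, dtype=torch.bool))

    # Sequence-to-sequence LM: source tokens see the whole source;
    # target tokens see the whole source plus the target prefix.
    seq2seq = torch.zeros(n, n, dtype=torch.bool)
    seq2seq[:src_len, :src_len] = True
    seq2seq[src_len:, :src_len] = True
    seq2seq[src_len:, src_len:] = torch.tril(
        torch.ones(tgt_len, tgt_len, dtype=torch.bool))
    return bidirectional, unidirectional, seq2seq

bi, uni, s2s = unilm_style_masks(src_len=3, tgt_len=2)
print(s2s.int())
# tensor([[1, 1, 1, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1]])
```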
Although existing machine reading comprehension models have made rapid progress on many datasets, they are far from robust. In this paper, we propose an understanding-oriented machine reading comprehension model to address three kinds of robustness issues: over-sensitivity, over-stability, and generalization. Specifically, we first use a natural language inference module to help the model understand the accurate semantic meaning of the input question, so as to address the over-sensitivity and over-stability issues. Then, in the machine reading comprehension module, we propose a memory-guided multi-head attention method that can further capture the semantic meaning of the input question and passage. Third, we propose a multi-language learning mechanism to address the generalization issue. Finally, these modules are integrated with a multi-task learning based method. We evaluate our model on three benchmark datasets that are designed to measure models' robustness, including DuReader (robust) and two SQuAD-related datasets. Extensive experiments show that our model can address the three kinds of robustness issues well, and it achieves much better results than the compared state-of-the-art models on all of these datasets, even under some extreme and unfair evaluations. The source code of our work is available at: https://github.com/neukg/robustmrc.
Despite the recent success of multi-task learning and transfer learning in natural language processing (NLP), the effect of scaling up the number of tasks during pre-training has rarely been studied systematically. Towards this goal, we introduce ExMix (Extreme Mixture): a massive collection of 107 supervised NLP tasks spanning diverse domains and task families. Using ExMix, we study the effect of multi-task pre-training at the largest scale to date and analyze co-training transfer among common task families. Through this analysis, we show that manually curating an ideal set of tasks for multi-task pre-training is not straightforward, and that multi-task scaling can improve models on its own. Finally, we propose ExT5: a model pre-trained using a multi-task objective of self-supervised span denoising and supervised ExMix. Via extensive experiments, we show that ExT5 outperforms strong T5 baselines on SuperGLUE, GEM, Rainbow, closed-book QA tasks, and several tasks outside ExMix. ExT5 also significantly improves sample efficiency during pre-training.
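A small data-level sketch of what mixing a self-supervised span-denoising stream with a large pool of supervised text-to-text tasks might look like. The task names, sampling ratio, and placeholder examples are assumptions, not the ExMix recipe or its 107 tasks.

```python
import random

# Placeholder task streams: each yields (input_text, target_text) pairs in a
# text-to-text format, as T5-style multi-task pre-training expects.
def span_denoising_stream():
    while True:
        yield ("Thank you <X> me to your party <Y> week.",
               "<X> for inviting <Y> last <Z>")

def supervised_stream(task_name):
    while True:
        yield (f"{task_name}: example input", "example target")

SUPERVISED_TASKS = ["mnli", "squad", "record", "wmt_en_de"]  # stand-ins for the pool

def mixed_batches(batch_size=8, supervised_ratio=0.5, seed=0):
    """Each batch mixes self-supervised span denoising with examples drawn
    from the supervised task pool."""
    rng = random.Random(seed)
    denoise = span_denoising_stream()
    streams = {t: supervised_stream(t) for t in SUPERVISED_TASKS}
    while True:
        batch = []
        for _ in range(batch_size):
            if rng.random() < supervised_ratio:
                batch.append(next(streams[rng.choice(SUPERVISED_TASKS)]))
            else:
                batch.append(next(denoise))
        yield batch

for src, tgt in next(mixed_batches())[:3]:
    print(src, "->", tgt)
```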
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
We provide the first exploration of sentence embeddings from text-to-text transformers (T5). Sentence embeddings are broadly useful for language processing tasks. While T5 achieves impressive performance on language tasks cast as sequence-to-sequence mapping problems, it is unclear how to produce sentence embeddings from encoder-decoder models. We investigate three methods for extracting T5 sentence embeddings: two utilize only the T5 encoder and one uses the full T5 encoder-decoder model. To support our investigation, we establish a new sentence representation transfer benchmark, SentGLUE, which extends the SentEval toolkit to nine tasks from the GLUE benchmark. Our encoder-only models outperform Sentence-BERT and SimCSE sentence embeddings on both SentEval and SentGLUE transfer tasks, including semantic textual similarity (STS). We find that scaling T5 from millions to billions of parameters produces consistent further improvements. Finally, our encoder-decoder method achieves a new state of the art on STS when using sentence embeddings. Our models are released at https://tfhub.dev/google/collections/sentence-t5/1.
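A minimal sketch of one encoder-only extraction strategy of the kind studied here: mean-pooling the T5 encoder outputs over non-padding tokens. It uses the public t5-base checkpoint rather than the released Sentence-T5 models, so the embeddings are only illustrative.

```python
import torch
from transformers import T5EncoderModel, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
encoder = T5EncoderModel.from_pretrained("t5-base")
encoder.eval()

def sentence_embedding(sentences):
    """Mean-pool the T5 encoder outputs over non-padding tokens."""
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (B, L, d_model)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, L, 1)
    summed = (hidden * mask).sum(dim=1)
    return summed / mask.sum(dim=1)                       # (B, d_model)

emb = sentence_embedding(["A man is playing a guitar.",
                          "Someone plays an instrument."])
cos = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(emb.shape, float(cos))
```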
The Query-Focused Text Summarization (QFTS) task aims at building systems that generate summaries of text documents based on a given query. A key challenge in addressing this task is the lack of large amounts of labeled data for training summarization models. In this paper, we address this challenge by exploring a series of domain adaptation techniques. Given the recent success of pre-trained transformer models on a wide range of natural language processing tasks, we utilize such models to generate abstractive summaries for the QFTS task in both single-document and multi-document scenarios. For domain adaptation, we apply various techniques to pre-trained transformer-based summarization models, including transfer learning, weakly supervised learning, and distant supervision. Extensive experiments on six datasets show that our proposed approach is very effective in generating abstractive summaries for the QFTS task while setting new state-of-the-art results on a set of automatic and human evaluation metrics.
In this paper, we take advantage of previous pre-trained models (PTMs) and propose a novel Chinese Pre-trained unbalanced Transformer (CPT). Different from previous Chinese PTMs, CPT is designed to exploit the shared knowledge between natural language understanding (NLU) and natural language generation (NLG) to boost performance. CPT consists of three parts: a shared encoder, an understanding decoder, and a generation decoder. Two specific decoders with a shared encoder are pre-trained with masked language modeling (MLM) and denoising auto-encoding (DAE) tasks, respectively. With the partially shared architecture and multi-task pre-training, CPT can (1) learn specific knowledge for NLU or NLG tasks with the two decoders, and (2) be fine-tuned flexibly to fully exploit the potential of the model. Moreover, the unbalanced Transformer saves computational and storage costs, which makes CPT competitive and greatly accelerates inference for text generation. Experimental results on a variety of Chinese NLU and NLG tasks show the effectiveness of CPT.
Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm to address this problem is to combine a search system with a machine reader, where the former retrieves supporting evidence and the latter examines it to produce answers. Recently, the reader component has witnessed significant advances with the help of large-scale pre-trained generative models. Meanwhile, most existing solutions for the search component rely on the traditional "index-retrieve-then-rank" pipeline, which suffers from a large memory footprint and is difficult to optimize end-to-end. Inspired by recent efforts in constructing model-based IR models, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can dramatically simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks and adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines on the retrieval tasks of the KILT benchmark and establish new state-of-the-art performance. We also show that CorpusBrain works well under zero-shot and low-resource settings.
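A toy illustration of single-step generative retrieval, not CorpusBrain itself: a stock BART checkpoint generates candidate strings from a query with beam search. After CorpusBrain-style pre-training or fine-tuning, the generated strings would be document identifiers (e.g., page titles), and real systems additionally constrain decoding to valid identifiers; the checkpoint name and generation parameters below are placeholders.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

# Illustrative only: an off-the-shelf seq2seq model, not a retrieval checkpoint.
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def generate_doc_identifiers(query, num_candidates=5):
    """Single-step retrieval: generate candidate document identifiers
    directly from the query with beam search."""
    inputs = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(
            **inputs,
            num_beams=num_candidates,
            num_return_sequences=num_candidates,
            max_new_tokens=16,
        )
    return tokenizer.batch_decode(ids, skip_special_tokens=True)

print(generate_doc_identifiers("Who wrote The Old Man and the Sea?"))
```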
In this paper, we explore the possibility of building a unified foundation model that can be adapted to both vision-only and text-only tasks. Starting from BERT and ViT, we design a unified transformer consisting of modality-specific tokenizers, a shared transformer encoder, and task-specific output heads. To efficiently pre-train the proposed model jointly on unpaired images and text, we propose two novel techniques: (i) we employ separately trained BERT and ViT models as teachers and apply knowledge distillation to provide additional, accurate supervision signals for the joint training; (ii) we propose a novel gradient masking strategy to balance the parameter updates from the image and text pre-training losses. We evaluate the jointly pre-trained transformer by fine-tuning it on image classification tasks and natural language understanding tasks, respectively. Experiments show that the resulting unified foundation transformer works surprisingly well on both vision-only and text-only tasks, and that the proposed knowledge distillation and gradient masking strategies can effectively lift the performance to approach the level of separately trained models.
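A minimal sketch of the shared-backbone layout described above: a token embedding for text and a ViT-style patch embedding for images feed one shared Transformer encoder with separate output heads. Positional embeddings, the distillation and gradient-masking pieces, and all sizes are omitted or assumed; this is not the authors' model.

```python
import torch
import torch.nn as nn

class UnifiedVisionTextTransformer(nn.Module):
    """Modality-specific tokenizers feeding one shared Transformer encoder,
    with separate output heads for image and text tasks (illustrative)."""

    def __init__(self, vocab=30522, hidden=768, n_image_classes=1000, n_text_labels=2):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, hidden)            # text tokenizer side
        self.patch_embed = nn.Conv2d(3, hidden, kernel_size=16, stride=16)  # 16x16 patches
        self.cls_token = nn.Parameter(torch.zeros(1, 1, hidden))
        layer = nn.TransformerEncoderLayer(hidden, nhead=12, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=12)
        self.image_head = nn.Linear(hidden, n_image_classes)
        self.text_head = nn.Linear(hidden, n_text_labels)

    def encode(self, tokens):
        # Positional embeddings omitted for brevity.
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        return self.shared_encoder(torch.cat([cls, tokens], dim=1))[:, 0]

    def forward_text(self, input_ids):
        return self.text_head(self.encode(self.text_embed(input_ids)))

    def forward_image(self, pixels):
        patches = self.patch_embed(pixels).flatten(2).transpose(1, 2)  # (B, N, hidden)
        return self.image_head(self.encode(patches))

model = UnifiedVisionTextTransformer()
print(model.forward_text(torch.randint(0, 30522, (2, 32))).shape)  # torch.Size([2, 2])
print(model.forward_image(torch.randn(2, 3, 224, 224)).shape)      # torch.Size([2, 1000])
```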
This paper presents a deep neural architecture for natural language sentence matching (NLSM) by adding a deep recursive encoder on top of BERT (BERT-DRE). Our analysis of model behavior shows that BERT still does not capture the full complexity of the text, so a deep recursive encoder is applied on top of BERT. The recursive encoder is designed with three Bi-LSTM layers with residual connections, and an attention module is used on top of this encoder. To obtain the final vector, a pooling layer consisting of average and max pooling is used. We experiment with our model on four benchmarks, SNLI, FarsTail, MultiNLI, and SciTail, as well as a novel Persian religious questions dataset. This paper focuses on improving the BERT results on the NLSM task. In this regard, comparisons between BERT-DRE and BERT are performed, and it is shown that BERT-DRE outperforms BERT in all cases. On the religious questions dataset, BERT achieves an accuracy of 89.70%, and the BERT-DRE architecture improves this to 90.29% on the same dataset.
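A hedged sketch of the described stack: BERT followed by three residual Bi-LSTM layers, a simple attention module, and concatenated mean and max pooling. The hidden sizes, attention form, and classifier head are assumptions rather than the BERT-DRE configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertDRESketch(nn.Module):
    """BERT + deep recursive encoder: three residual Bi-LSTM layers,
    an attention module, and mean+max pooling (illustrative sizes)."""

    def __init__(self, model_name="bert-base-uncased", n_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        d = self.bert.config.hidden_size
        self.bilstms = nn.ModuleList(
            [nn.LSTM(d, d // 2, batch_first=True, bidirectional=True) for _ in range(3)]
        )
        self.attn = nn.Linear(d, 1)                   # simple additive attention weights
        self.classifier = nn.Linear(2 * d, n_labels)  # mean-pool + max-pool concatenated

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        for lstm in self.bilstms:                     # residual Bi-LSTM stack
            out, _ = lstm(h)
            h = h + out
        weights = torch.softmax(self.attn(h), dim=1)
        h = h * weights                               # attention-weighted states
        pooled = torch.cat([h.mean(dim=1), h.max(dim=1).values], dim=-1)
        return self.classifier(pooled)

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertDRESketch()
batch = tok("A man is eating.", "Someone is having a meal.", return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 3])
```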
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
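A short illustration of the text-to-text format: every task becomes "text in, text out", distinguished only by a task prefix. The prefixes below (translation, summarization, CoLA) follow the conventions documented for the released T5 checkpoints; the t5-small checkpoint and generation settings are just convenient choices.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

# Different tasks, one interface: a prefixed input string and a target string.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: state authorities dispatched emergency crews tuesday to survey "
    "the damage after an onslaught of severe weather in mississippi.",
    "cola sentence: The course is jumping well.",  # acceptability judgment
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```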
This paper presents ReasonFormer, a unified reasoning framework for mirroring the modular and compositional reasoning process of humans in complex decision-making. Inspired by dual-process theory in cognitive science, the representation module (automatic thinking) and reasoning modules (controlled thinking) are decoupled to capture different levels of cognition. Upon the top of the representation module, the pre-trained reasoning modules are modular and professional in specific and fundamental reasoning skills (e.g., logic, simple QA, etc). To mimic the controlled compositional thinking process, different reasoning modules are dynamically activated and composed in both parallel and cascaded manners to control what reasoning skills are activated and how deep the reasoning process will be reached to solve the current problems. The unified reasoning framework solves multiple tasks with a single model, and is trained and inferred in an end-to-end manner. Evaluated on 11 datasets requiring different reasoning skills and complexity, ReasonFormer demonstrates substantial performance boosts, revealing the compositional reasoning ability. Few-shot experiments exhibit better generalization ability by learning to compose pre-trained skills for new tasks with limited data, and decoupling the representation module and the reasoning modules. Further analysis shows the modularity of reasoning modules as different tasks activate distinct reasoning skills at different reasoning depths.