Controlled text generation is an important task in natural language processing due to its promising applications. To address it, we introduce a novel soft prompt tuning method that uses soft prompts at both the encoder and decoder levels of a T5 model, and we investigate its performance, since the behaviour of an additional soft prompt attached to the decoder of a T5 model in controlled text generation has remained unexplored. We then investigate the feasibility of steering the output of this extended soft-prompted T5 model at the decoder level, and finally analyse the utility of the generated text for AI-related tasks such as training AI models, including an interpretability analysis of a classifier trained on the synthetic text, since methodologies for generating properly labelled data for such tasks have so far lacked thorough analysis. Through in-depth intrinsic and extrinsic evaluations of this generation model and the artificially generated data, we found that the model produces better results than a T5 model with a single soft prompt at the encoder level, that a sentiment classifier trained on this artificially generated data achieves classification results comparable to those of a classifier trained on real labelled data, and that the classifier's decisions are interpretable with respect to the input text content.
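As a rough illustration of the dual soft prompt idea, the following PyTorch sketch prepends trainable prompt embeddings to both the encoder input and the shifted decoder input of a frozen T5; the class name, prompt lengths, initialisation, and loss handling are assumptions for illustration, not the paper's implementation.

    # Illustrative sketch only: soft prompts at both the encoder and decoder of a
    # frozen T5. Prompt lengths, init scale, and the loss masking are assumptions.
    import torch
    import torch.nn as nn
    from transformers import T5ForConditionalGeneration

    class DualSoftPromptT5(nn.Module):
        def __init__(self, model_name="t5-base", enc_len=20, dec_len=20):
            super().__init__()
            self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
            for p in self.t5.parameters():            # keep the backbone frozen
                p.requires_grad = False
            d = self.t5.config.d_model
            self.enc_prompt = nn.Parameter(torch.randn(enc_len, d) * 0.02)
            self.dec_prompt = nn.Parameter(torch.randn(dec_len, d) * 0.02)

        def forward(self, input_ids, attention_mask, labels):
            emb = self.t5.get_input_embeddings()
            b = input_ids.size(0)
            # encoder side: [encoder soft prompt ; source token embeddings]
            enc_inputs = torch.cat(
                [self.enc_prompt.unsqueeze(0).expand(b, -1, -1), emb(input_ids)], dim=1)
            enc_mask = torch.cat(
                [torch.ones(b, self.enc_prompt.size(0), device=input_ids.device),
                 attention_mask], dim=1)
            # decoder side: [decoder soft prompt ; shifted target embeddings]
            dec_ids = self.t5._shift_right(labels)
            dec_inputs = torch.cat(
                [self.dec_prompt.unsqueeze(0).expand(b, -1, -1), emb(dec_ids)], dim=1)
            out = self.t5(inputs_embeds=enc_inputs, attention_mask=enc_mask,
                          decoder_inputs_embeds=dec_inputs)
            logits = out.logits[:, self.dec_prompt.size(0):, :]   # drop prompt positions
            return nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), labels.reshape(-1), ignore_index=-100)

Only enc_prompt and dec_prompt receive gradients, so training updates a few thousand parameters while the T5 backbone stays fixed.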
In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient "prompt ensembling."
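For context, encoder-side prompt tuning of this kind can be set up with the Hugging Face PEFT library; a minimal sketch, assuming a t5-base backbone, a 20-token soft prompt, and an illustrative initialisation phrase:

    # Sketch of standard prompt tuning: a soft prompt learned by backpropagation
    # while the backbone stays frozen. Model name and hyperparameters are assumptions.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

    base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
    tokenizer = AutoTokenizer.from_pretrained("t5-base")

    config = PromptTuningConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        num_virtual_tokens=20,                      # length of the soft prompt
        prompt_tuning_init=PromptTuningInit.TEXT,   # initialise from a text phrase
        prompt_tuning_init_text="classify the sentiment of this review:",
        tokenizer_name_or_path="t5-base",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()              # only the prompt embeddings train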
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches has emerged over the last three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled through natural text prompts. Recent studies report that prompt-based direct classification eliminates the need for fine-tuning but lacks data and inference scalability. This paper proposes a novel data augmentation technique that leverages large-scale language models to generate realistic text samples from a mixture of real samples. We also propose utilizing the soft labels predicted by the language model, effectively distilling knowledge from the large-scale language model while creating textual perturbations at the same time. We conduct data augmentation experiments on a variety of classification tasks and show that our method greatly outperforms existing text augmentation methods. Ablation studies and a qualitative analysis provide further insights into our approach.
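A minimal sketch of the soft-label idea: read a generative LM's probabilities over candidate label words for an augmented example and use them as the training target. The prompt wording, model choice, and label words below are assumptions, not the paper's exact recipe.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")

    def soft_label(text, label_words=(" negative", " positive")):
        prompt = f'Review: "{text}"\nSentiment:'
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits[0, -1]                    # next-token distribution
        label_ids = [tok.encode(w)[0] for w in label_words]   # first sub-token per label
        return torch.softmax(logits[label_ids], dim=-1).tolist()   # e.g. [0.12, 0.88]

    print(soft_label("A delightful, funny film."))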
In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and causal language modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on a variety of tasks. In particular, we train a 20-billion-parameter multilingual seq2seq model called the Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming the much larger 540B PaLM decoder-only model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs (Arabic, English, French, German, Hindi, Italian, Japanese, and Telugu) on the Flores-101 dataset. We also show that in the zero-shot setting, AlexaTM 20B outperforms GPT-3 (175B) on the SuperGLUE and SQuADv2 datasets and delivers SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for large language model (LLM) training.
Data augmentation, the artificial creation of training data for machine learning through transformations, is a research field that spans machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming limited training data, to regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy of existing works, this survey covers data augmentation methods for text classification and aims to provide a concise and comprehensive overview for researchers and practitioners. We divide more than 100 methods into 12 different groupings and provide state-of-the-art references that, by relating the methods to one another, clarify which are most promising. Finally, research perspectives that may form the basis for future work are provided.
Pre-trained large language models can efficiently interpolate human-written prompts in a natural way. Multitask prompted learning can help generalization through a diverse set of tasks at once, thus enhancing the potential for more effective downstream fine-tuning. To perform efficient multitask inference in the same batch, parameter-efficient fine-tuning methods such as prompt tuning have been proposed. However, existing prompt tuning methods may lack generalization. We propose SPT, a semi-parametric prompt tuning method for multitask prompted learning. The novel component of SPT is a memory bank from which memory prompts are retrieved based on discrete prompts. Extensive experiments, such as (i) fine-tuning a full language model with SPT on 31 different tasks from 8 different domains and evaluating zero-shot generalization on 9 held-out datasets under 5 NLP task categories, and (ii) pre-training SPT on the GLUE datasets and evaluating fine-tuning on the SuperGLUE datasets, demonstrate the effectiveness of SPT.
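One plausible reading of the memory-bank component, sketched below purely for illustration (sizes, scoring, and top-k mixing are assumptions, not the SPT implementation): soft prompts are retrieved from a learned bank by similarity between an encoding of the discrete prompt and per-entry keys.

    import torch
    import torch.nn as nn

    class PromptMemoryBank(nn.Module):
        def __init__(self, num_entries=32, prompt_len=10, d_model=768, top_k=4):
            super().__init__()
            self.keys = nn.Parameter(torch.randn(num_entries, d_model))
            self.values = nn.Parameter(torch.randn(num_entries, prompt_len, d_model))
            self.top_k = top_k

        def forward(self, query):                  # query: encoding of the discrete prompt
            scores = query @ self.keys.T           # (batch, num_entries)
            top = scores.topk(self.top_k, dim=-1)
            weights = torch.softmax(top.values, dim=-1)            # (batch, k)
            retrieved = self.values[top.indices]                   # (batch, k, len, d)
            return (weights[..., None, None] * retrieved).sum(1)   # (batch, len, d)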
The task of inserting text at a specified position in a passage, known as fill-in-the-blank (FitB), is useful for a variety of applications in which writers interact with a natural language generation (NLG) system to craft text. While prior work has tackled this problem with models trained specifically for the task, a more useful model is one that can effectively perform both FitB and continuation. In this work, we evaluate the feasibility of using a single model for both tasks. We show that models pre-trained with a FitB-style objective are capable of both tasks, while models pre-trained for continuation are not. Finally, we show how FitB models can easily be fine-tuned to allow fine-grained control over the length and word choice of the generation.
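A generic illustration of fill-in-the-blank with a model pre-trained on an infilling-style objective (T5's span corruption); this is not the paper's specific model or data.

    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tok = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    text = "The hike was <extra_id_0>, so we turned back before reaching the summit."
    ids = tok(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=10)
    # The output interleaves sentinel tokens with the predicted span for the blank.
    print(tok.decode(out[0], skip_special_tokens=False))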
Large pretrained language models generate fluent text but are notoriously hard to controllably sample from. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints while maintaining fluency and the model's performance in a downstream task. We propose MuCoLa -- a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function, and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin dynamics using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints, as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
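A simplified sketch of one Langevin-dynamics update on continuous output embeddings, in the spirit of energy-based constrained sampling; energy() stands in for a weighted sum of LM negative log-likelihood and differentiable constraint penalties, and the step size and iteration count are assumptions.

    import math
    import torch

    def langevin_step(embeddings, energy, step_size=0.1):
        """embeddings: (seq_len, d) tensor with requires_grad=True."""
        e = energy(embeddings)            # scalar: -log p_LM(x) + constraint penalties
        grad, = torch.autograd.grad(e, embeddings)
        noise = torch.randn_like(embeddings) * math.sqrt(2 * step_size)
        return (embeddings - step_size * grad + noise).detach().requires_grad_(True)

    # usage sketch: initialise the whole sequence with noise and iterate
    # x = torch.randn(20, 768, requires_grad=True)
    # for _ in range(500):
    #     x = langevin_step(x, energy_fn)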
Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, Director, which consists of a unified generator-classifier with both a language modeling head and a classification head for each output token. Training is conducted jointly using standard language modeling data together with data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating the known issues while maintaining generation quality. It also outperforms existing model-guiding approaches in terms of both accuracy and efficiency.
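A self-contained sketch of a generator-classifier decoding rule of this kind: next-token scores mix the language-modeling head's log-probabilities with a per-token classification head's log-probability of the "desirable" class. The head shapes and the mixing weight gamma are assumptions for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d_model, vocab = 512, 32000
    lm_head = nn.Linear(d_model, vocab)        # standard language-modeling head
    clf_head = nn.Linear(d_model, vocab * 2)   # desirable/undesirable score per token

    def combined_next_token_scores(hidden, gamma=1.0):
        """hidden: (batch, d_model) decoder state at the current position."""
        lm_logprobs = F.log_softmax(lm_head(hidden), dim=-1)              # (batch, vocab)
        clf_logits = clf_head(hidden).view(hidden.size(0), vocab, 2)
        desirable = F.log_softmax(clf_logits, dim=-1)[..., 1]             # (batch, vocab)
        return lm_logprobs + gamma * desirable   # sample or take argmax from this

    scores = combined_next_token_scores(torch.randn(2, d_model))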
Prompt tuning is an extremely effective tool for adapting a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider the case where sufficient data is available for the downstream task. It remains unclear whether these advantages transfer to the few-shot regime, where only limited data is available for each downstream task. Although some works have demonstrated the potential of prompt tuning in the few-shot setting, the mainstream methods of either searching for discrete prompts or tuning soft prompts with limited data remain very challenging. Through extensive empirical studies, we find that there is still a gap between prompt tuning and full fine-tuning. To bridge this gap, we propose a new prompt tuning framework called Soft Template Tuning (STT). STT combines manual and automatic prompts and treats downstream classification tasks as a masked language modeling task. A comprehensive evaluation across different settings shows that STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Notably, it can even outperform the time- and resource-consuming fine-tuning approach on sentiment classification tasks.
Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploiting training (PET), have achieved impressive results in label-scarce settings. However, they are difficult to employ because they are subject to high variability from manually crafted prompts and typically require billion-parameter language models to reach high accuracy. To address these shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers (ST). SetFit first fine-tunes a pretrained ST on a small number of text pairs in a contrastive Siamese manner. The resulting model is then used to generate rich text embeddings, which are used to train a classification head. This simple framework requires no prompts or verbalizers and achieves high accuracy with far fewer parameters than existing techniques. Our experiments show that SetFit obtains results comparable to PEFT and PET techniques while being much faster to train. We also show that SetFit can be applied in multilingual settings by simply switching the ST body. Our code is available at https://github.com/huggingface/setFit and our datasets at https://huggingface.co/setfit.
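A sketch of the two-stage recipe via the setfit library; the checkpoint, dataset slice, and column mapping are placeholders, and the trainer API differs somewhat between library versions.

    from datasets import load_dataset
    from setfit import SetFitModel, SetFitTrainer

    train_ds = load_dataset("sst2", split="train[:64]")        # tiny few-shot slice
    eval_ds = load_dataset("sst2", split="validation[:256]")

    model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
    trainer = SetFitTrainer(
        model=model,
        train_dataset=train_ds,
        eval_dataset=eval_ds,
        column_mapping={"sentence": "text", "label": "label"},
    )
    trainer.train()       # stage 1: contrastive ST fine-tuning; stage 2: fit the head
    print(trainer.evaluate())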
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
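A tiny illustration of the text-to-text format with a public T5 checkpoint, where every task is phrased as input text mapped to output text; the task prefixes follow common T5 usage and are shown only as examples.

    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tok = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    examples = [
        "translate English to German: The house is wonderful.",
        "summarize: The quarterly report shows revenue grew 12% while costs stayed flat.",
    ]
    for text in examples:
        ids = tok(text, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=32)
        print(tok.decode(out[0], skip_special_tokens=True))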
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language model while fully exploiting the knowledge already present in the large pre-trained model remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which elicit knowledge from the large pre-trained model. To encourage the model to make better use of the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluation strongly supports the superiority of DialogPrompt with respect to response generation quality.
Large language models such as GPT-3 (Brown et al., 2020) can perform arbitrary tasks without fine-tuning after being prompted with only a few labeled examples. An arbitrary task can be reformulated as a natural language prompt, and the language model can be asked to generate the completion, performing the task indirectly in a paradigm known as prompt-based learning. To date, emergent prompt-based learning capabilities have mainly been demonstrated for unidirectional language models. However, bidirectional language models pre-trained with objectives such as masked language modeling provide stronger learned representations for transfer learning. This motivates prompting bidirectional models, but their pre-training objectives have made them incompatible with the existing prompting paradigm. We present SAP (Sequential Autoregressive Prompting), a technique that enables the prompting of bidirectional models. Using machine translation as a case study, we prompt the bidirectional mT5 model (Xue et al., 2021) with SAP and demonstrate that its few-shot and zero-shot translations outperform the few-shot translations of unidirectional models such as GPT-3 and XGLM (Lin et al., 2021), despite mT5 having roughly 50% fewer parameters. We further show that SAP is effective for question answering and summarization. Our results show for the first time that prompt-based learning is an emergent property of a broader class of language models, rather than only unidirectional models.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (with varying degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing conditions from constraints (the latter being testable conditions on the output text rather than on the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing readers of the most promising directions and limitations towards advancing the state of the art of constrained neural language generation research.
Much of named entity recognition (NER) research focuses on developing dataset-specific models based on data from the domain of interest and a limited set of related entity types. This is frustrating, as each new dataset requires a new model to be trained and stored. In this work, we present a ``versatile'' model -- the Prompting-based Unified NER system (PUnifiedNER) -- that works with data from different domains and can recognise up to 37 entity types simultaneously, and could in principle support even more. By using prompt learning, PUnifiedNER is a novel approach that can jointly train across multiple corpora, implementing intelligent on-demand entity recognition. Experimental results show that PUnifiedNER yields significant prediction benefits compared to dataset-specific models, with impressively reduced model deployment costs. Furthermore, PUnifiedNER can achieve competitive or even better performance than state-of-the-art domain-specific methods on some datasets. We also perform comprehensive pilot and ablation studies to support in-depth analysis of each component in PUnifiedNER.
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning during language model pretraining (Radford et al., 2019). Can zero-shot generalization instead be induced directly by explicit multitask learning? To test this question at scale, we develop a system to easily map any natural language task into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts using diverse wording. These prompted datasets allow benchmarking a model's ability to perform completely unseen tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this collection of prompted datasets, covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Furthermore, our approach performs strongly on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All prompts and trained models are available at https://github.com/bigscience-workshop/promptsource and https://huggingface.co/bigscience/t0pp.
In many machine learning settings, research suggests that the development of the training data may matter more than the choice and modeling of the classifier itself. Consequently, data augmentation methods have been developed to improve classifiers by artificially creating training data. In NLP, establishing universal rules for text transformations that provide new linguistic patterns is a challenge. In this paper, we present and evaluate a text generation method suitable for increasing the performance of classifiers for both long and short texts. Augmenting with our text generation method yields encouraging improvements on both short- and long-text tasks. Especially with regard to small-data analytics, additive accuracy gains of up to 15.53% and 3.56% are achieved within a constructed low-data regime, compared to the no-augmentation baseline and another data augmentation technique. As these constructed regimes are not universally applicable, we also show significant improvements on several real-world low-data tasks (up to +4.84 F1 score). Since we evaluate the method from many perspectives (11 datasets in total), we also observe situations where the method may not be suitable. We discuss implications and patterns for successfully applying our approach to different types of datasets.
Prompt-learning has become a new paradigm in modern natural language processing, directly adapting pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, and yielding promising performance on a variety of tasks. However, no standard implementation framework for prompt-learning has been proposed, and most existing prompt-learning codebases, often unregulated, provide only limited implementations for specific scenarios. Because many details, such as templating, initialization, and verbalizing strategies, need to be considered in prompt-learning, practitioners face obstacles in quickly adapting the desired prompt-learning methods to their applications. In this paper, we present OpenPrompt, a unified, easy-to-use toolkit for prompt-learning over PLMs. OpenPrompt is a research-friendly framework equipped with efficiency, modularity, and extensibility, and its composability allows users to freely combine different PLMs, task formats, and prompting modules in a unified paradigm. Users can readily deploy prompt-learning frameworks and evaluate their generalization on different NLP tasks without constraints. OpenPrompt is publicly released at https://github.com/thunlp/openprompt.
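A minimal sketch following OpenPrompt's documented pattern of loading a PLM and combining a template, a verbalizer, and the model into a prompt classifier; exact class signatures may differ between OpenPrompt releases, and the backbone and label words here are placeholders.

    from openprompt.plms import load_plm
    from openprompt.prompts import ManualTemplate, ManualVerbalizer
    from openprompt import PromptForClassification

    plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

    template = ManualTemplate(
        tokenizer=tokenizer,
        text='{"placeholder":"text_a"} It was {"mask"}.',
    )
    verbalizer = ManualVerbalizer(
        tokenizer=tokenizer,
        num_classes=2,
        label_words=[["terrible"], ["great"]],
    )
    prompt_model = PromptForClassification(
        plm=plm, template=template, verbalizer=verbalizer, freeze_plm=False,
    )
    # prompt_model can then be trained with a normal PyTorch loop over a PromptDataLoader.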