Speech representations learned from self-supervised learning (SSL) models can benefit various speech processing tasks. However, utilizing SSL representations usually requires fine-tuning the pre-trained models or designing task-specific downstream models and loss functions, causing considerable memory usage and human labor. Recently, prompting in natural language processing (NLP) has been found to be an efficient technique for leveraging pre-trained language models (LMs). Specifically, prompt tuning optimizes a limited number of task-specific parameters with a fixed pre-trained model; as a result, only a small set of parameters needs to be stored for each task. Prompt tuning improves computation and memory efficiency by leveraging the pre-trained LM's prediction ability. Nevertheless, this paradigm has rarely been studied in the speech community. In this paper we report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM). Experimental results show that the prompt tuning technique achieves competitive performance on speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models. We further study the technique on the more challenging sequence generation tasks, where prompt tuning also demonstrates its potential; the limitations and possible research directions are discussed in the paper. The source code is available at https://github.com/ga642381/speechprompt.
Prompt tuning is an extremely effective tool for adapting a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider situations with sufficient data for the downstream task. It remains unclear whether the advantage transfers to the few-shot regime, where only limited data are available for each downstream task. Although some works have demonstrated the potential of prompt tuning under the few-shot setting, the mainstream approaches of searching discrete prompts or tuning soft prompts with limited data remain very challenging. Through extensive empirical studies, we find that there is still a learning gap between prompt tuning and full fine-tuning. To bridge the gap, we propose a new prompt tuning framework called Soft Template Tuning (STT). STT combines manual and automatic prompts and treats downstream classification tasks as a masked language modeling task. A comprehensive evaluation across different settings shows that STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Notably, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.
In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient "prompt ensembling."
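A minimal PyTorch sketch of the mechanism described in this abstract (illustrative, not the authors' released code): a small matrix of trainable prompt embeddings is prepended to the frozen model's input embeddings, and only that matrix receives gradients. The class name, prompt length, and SST-2-style example are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to the frozen model's input embeddings."""
    def __init__(self, model, num_prompt_tokens=20):
        super().__init__()
        emb = model.get_input_embeddings()
        # Initialize from random vocabulary embeddings (one common heuristic).
        init_ids = torch.randint(0, emb.num_embeddings, (num_prompt_tokens,))
        self.prompt = nn.Parameter(emb(init_ids).detach().clone())

    def forward(self, input_embeds):
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
for p in model.parameters():                      # the backbone stays frozen
    p.requires_grad = False

soft_prompt = SoftPrompt(model, num_prompt_tokens=20)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=0.3)

batch = tok(["sst2 sentence: a gorgeous, witty film ."], return_tensors="pt")
labels = tok(["positive"], return_tensors="pt").input_ids

inputs_embeds = soft_prompt(model.get_input_embeddings()(batch.input_ids))
attention_mask = torch.cat(
    [torch.ones(1, soft_prompt.prompt.size(0), dtype=torch.long), batch.attention_mask], dim=1
)
loss = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels).loss
loss.backward()                                   # gradients flow only into the prompt matrix
optimizer.step()
```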
Pre-trained large language models can efficiently interpolate human-written prompts in a natural way. Multitask prompted learning can help generalization through a diverse set of tasks at once, thus enhancing the potential for more effective downstream fine-tuning. To perform efficient multitask-inference in the same batch, parameter-efficient fine-tuning methods such as prompt tuning have been proposed. However, the existing prompt tuning methods may lack generalization. We propose SPT, a semi-parametric prompt tuning method for multitask prompted learning. The novel component of SPT is a memory bank from where memory prompts are retrieved based on discrete prompts. Extensive experiments, such as (i) fine-tuning a full language model with SPT on 31 different tasks from 8 different domains and evaluating zero-shot generalization on 9 heldout datasets under 5 NLP task categories and (ii) pretraining SPT on the GLUE datasets and evaluating fine-tuning on the SuperGLUE datasets, demonstrate effectiveness of SPT.
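The memory-bank component can be pictured with a small sketch like the following (the bank size, query encoding, and top-k mixing rule are assumptions for illustration, not the SPT implementation): an encoding of the discrete prompt retrieves memory prompts, which are mixed into the soft prompt fed to the LM.

```python
import torch
import torch.nn as nn

d_model, prompt_len, bank_size = 768, 20, 16

memory_keys = nn.Parameter(torch.randn(bank_size, d_model))
memory_prompts = nn.Parameter(torch.randn(bank_size, prompt_len, d_model) * 0.02)

def retrieve_prompt(discrete_prompt_embedding, top_k=2):
    """discrete_prompt_embedding: (d,) e.g. a pooled LM embedding of the textual prompt."""
    scores = memory_keys @ discrete_prompt_embedding                 # (bank_size,)
    weights, idx = torch.topk(torch.softmax(scores, dim=-1), k=top_k)
    weights = weights / weights.sum()
    # Weighted mixture of the retrieved memory prompts.
    return torch.einsum("k,kld->ld", weights, memory_prompts[idx])   # (prompt_len, d)

query = torch.randn(d_model)          # stand-in for the encoded discrete prompt
soft_prompt = retrieve_prompt(query)
print(soft_prompt.shape)              # torch.Size([20, 768])
```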
We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations. This form of "black-box" classifier training has become increasingly important as the cost of training and inference in large-scale LMs grows. But existing black-box LM classifier learning approaches are themselves computationally inefficient, typically specializing LMs to the target task by searching in a large space of (discrete or continuous) prompts using zeroth-order optimization methods. Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution. These weak learners are then ensembled using the AdaBoost algorithm. The entire learning process requires only a small number of forward passes and no backward pass. Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
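A toy rendition of the ensembling recipe sketched in this abstract (not the authors' implementation): the LM mask probabilities are faked with random numbers, each weak learner pairs a prompt with two output-vocabulary entries, and AdaBoost weights the learners. In practice each probability row would come from a single forward pass of the frozen LM with the corresponding prompt wrapped around the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_prompts, vocab = 200, 10, 50
y = rng.integers(0, 2, n_examples)                       # binary labels
mask_probs = rng.random((n_prompts, n_examples, vocab))  # fake LM output distributions per prompt

def make_weak_learner(prompt_id, weights):
    """Pick the pair of vocabulary entries that best separates the weighted training data."""
    scores = mask_probs[prompt_id]                        # (n_examples, vocab)
    best, best_gain = None, -1.0
    for pos_tok in range(vocab):
        for neg_tok in range(vocab):
            if pos_tok == neg_tok:
                continue
            pred = (scores[:, pos_tok] > scores[:, neg_tok]).astype(int)
            gain = np.sum(weights * (pred == y))
            if gain > best_gain:
                best, best_gain = (pos_tok, neg_tok), gain
    pos_tok, neg_tok = best
    return lambda s: (s[:, pos_tok] > s[:, neg_tok]).astype(int)

weights = np.full(n_examples, 1.0 / n_examples)
ensemble = []
for t in range(20):                                       # AdaBoost rounds
    prompt_id = t % n_prompts
    learner = make_weak_learner(prompt_id, weights)
    pred = learner(mask_probs[prompt_id])
    err = np.clip(np.sum(weights * (pred != y)), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    weights *= np.exp(np.where(pred == y, -alpha, alpha))
    weights /= weights.sum()
    ensemble.append((prompt_id, learner, alpha))

# Weighted-vote inference (maps {0,1} predictions to {-1,+1}).
votes = sum(a * (2 * learner(mask_probs[pid]) - 1) for pid, learner, a in ensemble)
print("train accuracy:", np.mean((votes > 0).astype(int) == y))
```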
Pre-trained models have proven effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs to pre-training and to the downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Moreover, fine-tuning performance depends heavily on the amount of downstream data, while in practice scenarios with scarce data are common. Recent studies in natural language processing (NLP) show that prompt tuning, a new tuning paradigm, alleviates these issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively little data. In this paper, we empirically evaluate the usage and effect of prompt tuning on code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with three code intelligence tasks: defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all three tasks. Moreover, prompt tuning shows great potential in low-resource scenarios; for example, for code summarization it improves the BLEU score of fine-tuning by more than 26% on average. Our results suggest that, rather than fine-tuning, prompt tuning can be adapted to code intelligence tasks to achieve better performance, especially when task-specific data are scarce.
In this paper, we describe our participation in Subtask 1 of CASE-2022, Event Causality Identification with Causal News Corpus. We address the Causal Relation Identification (CRI) task by exploiting a set of simple yet complementary techniques over a small number of annotated examples (i.e., a few-shot configuration). We follow a prompt-based prediction approach for fine-tuning LMs in which the CRI task is treated as a masked language modeling (MLM) problem. This approach allows LMs, natively pre-trained on MLM objectives, to directly generate textual responses to CRI-specific prompts. We compare the performance of this method against ensemble techniques trained on the entire dataset. Our best-performing submission was trained with only 256 instances per class, a small fraction of the whole dataset, yet it obtained the second-best precision (0.82), the third-best accuracy (0.82), and an F1 score (0.85) very close to the one reported by the winning team (0.86).
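A hedged sketch of the prompt-based MLM formulation used here, with an illustrative template and verbalizer (not the ones from the submission): the classifier scores label words at a mask position instead of training a new classification head; prompt-based fine-tuning would backpropagate a cross-entropy loss over exactly these label-word logits.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# Hypothetical verbalizer: label -> label word scored at the mask position.
label_words = {"causal": " yes", "not_causal": " no"}
label_ids = {k: tok.convert_tokens_to_ids(tok.tokenize(v))[0] for k, v in label_words.items()}

def classify(sentence):
    prompt = f"{sentence} Does the sentence describe a cause-effect relation? {tok.mask_token}."
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0][0]
    mask_logits = logits[0, mask_pos]
    # For prompt-based fine-tuning, a cross-entropy loss over these label-word logits
    # would be backpropagated through the LM; here we only do zero-shot inference.
    return max(label_ids, key=lambda lbl: mask_logits[label_ids[lbl]].item())

print(classify("The earthquake caused the bridge to collapse."))
```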
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF (better few-shot fine-tuning of language models), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low-resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.
Sequence-to-sequence (seq2seq) learning is a popular approach for pretraining large-scale language models. However, prior seq2seq pretraining models generally focus on reconstructive objectives on the decoder side and neglect the effect of encoder-side supervision, which we argue may lead to sub-optimal performance. To verify our hypothesis, we first empirically study the functionalities of the encoder and decoder in seq2seq pretrained language models, and find that the encoder plays an important yet under-exploited role compared with the decoder with respect to downstream performance and neuron activation. Therefore, we propose an encoding-enhanced seq2seq pretraining strategy, namely E2S2, which improves seq2seq models by integrating more efficient self-supervised information into the encoders. Specifically, E2S2 adopts two self-supervised objectives on the encoder side from two aspects: 1) locally denoising the corrupted sentence (denoising objective); and 2) globally learning better sentence representations (contrastive objective). With the help of both objectives, the encoder can effectively distinguish the noise tokens and capture high-level (i.e., syntactic and semantic) knowledge, thus strengthening the ability of the seq2seq model to perform accurate conditional generation. On a large diversity of downstream natural language understanding and generation tasks, E2S2 consistently improves the performance of its powerful backbone models, e.g., BART and T5. For example, on the BART backbone, we achieve a +1.1% averaged gain on the general language understanding evaluation (GLUE) benchmark and a +1.75% F_0.5 score improvement on the CoNLL2014 dataset. We also provide in-depth analyses to show that the improvement stems from better linguistic representation. We hope that our work will foster future self-supervision research on seq2seq language model pretraining.
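A toy illustration of the two encoder-side objectives (a token-level denoising loss plus a SimCSE-style contrastive loss); the tiny encoder, corruption rate, and temperature below are simplified stand-ins rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model = 1000, 256
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
denoise_head = nn.Linear(d_model, vocab)

def corrupt(ids, rate=0.15):
    noisy = ids.clone()
    mask = torch.rand_like(ids, dtype=torch.float) < rate
    noisy[mask] = torch.randint(0, vocab, (int(mask.sum()),))
    return noisy, mask

def encoder_losses(clean_ids, temperature=0.1):
    corrupted, noise_mask = corrupt(clean_ids)
    hidden = encoder(embed(corrupted))                                  # (B, T, d)

    # 1) Local denoising objective: recover the original tokens at corrupted positions.
    denoise_loss = F.cross_entropy(denoise_head(hidden)[noise_mask], clean_ids[noise_mask])

    # 2) Global contrastive objective: two corrupted views of the same sentence attract,
    #    other sentences in the batch repel (InfoNCE).
    z1 = F.normalize(hidden.mean(dim=1), dim=-1)
    corrupted2, _ = corrupt(clean_ids)
    z2 = F.normalize(encoder(embed(corrupted2)).mean(dim=1), dim=-1)
    sims = z1 @ z2.t() / temperature                                    # (B, B)
    contrastive_loss = F.cross_entropy(sims, torch.arange(len(sims)))
    return denoise_loss + contrastive_loss

batch = torch.randint(0, vocab, (8, 32))
loss = encoder_losses(batch)
loss.backward()
print(float(loss))
```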
Prompt learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, yielding promising performance on various tasks. However, no standard implementation framework for prompt learning has been proposed, and most existing prompt-learning codebases, often unregulated, only provide limited implementations for specific scenarios. Since many details, such as the templating strategy, initialization strategy, and verbalizing strategy, need to be considered in prompt learning, practitioners face obstacles in quickly adapting the desired prompt-learning methods to their applications. In this paper, we present OpenPrompt, a unified and easy-to-use toolkit for prompt learning with PLMs. OpenPrompt is a research-friendly framework that is equipped with efficiency, modularity, and extensibility, and its combinability allows different PLMs, task formats, and prompting modules to be freely combined in a unified paradigm. Users can expediently deploy prompt-learning frameworks and evaluate their generalization on different NLP tasks without constraints. OpenPrompt is publicly released at https://github.com/thunlp/openprompt.
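A minimal usage sketch in the spirit of the OpenPrompt README (class and argument names follow the documented interface as far as known, but may differ across versions; consult the repository for the authoritative API):

```python
import torch
from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

classes = ["negative", "positive"]
dataset = [InputExample(guid=0, text_a="A charming and affecting journey.", label=1)]

# Load a PLM together with its tokenizer and tokenizer wrapper class.
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

template = ManualTemplate(text='{"placeholder":"text_a"} It was {"mask"}.', tokenizer=tokenizer)
verbalizer = ManualVerbalizer(
    classes=classes,
    label_words={"negative": ["terrible"], "positive": ["great"]},
    tokenizer=tokenizer,
)
model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

loader = PromptDataLoader(dataset=dataset, template=template, tokenizer=tokenizer,
                          tokenizer_wrapper_class=WrapperClass)

model.eval()
with torch.no_grad():
    for batch in loader:
        logits = model(batch)                        # (batch, num_classes) over label words
        print(classes[int(torch.argmax(logits, dim=-1))])
```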
This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts: small prefix embedding vectors pre-trained for different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared between tasks in multi-task training, are updated, while the original LM and source prompts are intact. ATTEMPT is highly parameter-efficient (e.g., updates 2,300 times fewer parameters than full fine-tuning) while achieving high task performance using knowledge from high-resource tasks. Moreover, it is modular using pre-trained soft prompts, and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned or other parameter-efficient tuning approaches that use over ten times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings.
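A simplified sketch of the attentional prompt-mixture idea (shapes, the pooling-based attention module, and initialization are illustrative, not the released ATTEMPT code): frozen source prompts and a new target prompt are mixed per instance and prepended to the frozen LM's input embeddings.

```python
import torch
import torch.nn as nn

d_model, prompt_len, n_source = 768, 100, 6

class PromptMixture(nn.Module):
    def __init__(self):
        super().__init__()
        # Source prompts would be loaded from prompt-tuning runs on source tasks; kept frozen.
        self.source_prompts = nn.Parameter(torch.randn(n_source, prompt_len, d_model),
                                           requires_grad=False)
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.attn_proj = nn.Linear(d_model, d_model)      # trainable attention module

    def forward(self, input_embeds):
        # Instance representation: mean over the input token embeddings.
        query = self.attn_proj(input_embeds.mean(dim=1))                     # (B, d)
        prompts = torch.cat([self.source_prompts,
                             self.target_prompt.unsqueeze(0)])               # (S+1, L, d)
        keys = prompts.mean(dim=1)                                           # (S+1, d)
        weights = torch.softmax(query @ keys.t() / d_model ** 0.5, dim=-1)   # (B, S+1)
        mixed = torch.einsum("bs,sld->bld", weights, prompts)                # (B, L, d)
        return torch.cat([mixed, input_embeds], dim=1)    # prepend to frozen-LM inputs

mixer = PromptMixture()
fake_inputs = torch.randn(4, 32, d_model)                 # stand-in for frozen-LM embeddings
print(mixer(fake_inputs).shape)                           # torch.Size([4, 132, 768])
```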
Spoken language understanding (SLU) is a task aiming to extract high-level semantics from spoken utterances. Previous works have investigated the use of speech self-supervised models and textual pre-trained models, which have shown reasonable improvements to various SLU tasks. However, because of the mismatched modalities between speech signals and text tokens, previous methods usually need complex designs of the frameworks. This work proposes a simple yet efficient unsupervised paradigm that connects speech and textual pre-trained models, resulting in an unsupervised speech-to-semantic pre-trained model for various tasks in SLU. To be specific, we propose to use unsupervised automatic speech recognition (ASR) as a connector that bridges different modalities used in speech and textual pre-trained models. Our experiments show that unsupervised ASR itself can improve the representations from speech self-supervised models. More importantly, it is shown as an efficient connector between speech and textual pre-trained models, improving the performances of five different SLU tasks. Notably, on spoken question answering, we reach the state-of-the-art result over the challenging NMSQA benchmark.
Recently, pioneering work has found that speech pre-trained models can solve full-stack speech processing tasks, because the model utilizes the bottom layers to learn speaker-related information and the top layers to encode content-related information. Since the network capacity is limited, we believe the speech recognition performance can be further improved if the model is dedicated to learning audio content information. To this end, we propose Intermediate Layer Supervision for Self-Supervised Learning (ILS-SSL), which forces the model to concentrate on content information as much as possible by adding an additional SSL loss on the intermediate layers. Experiments on the LibriSpeech test-other set show that our method significantly outperforms HuBERT, achieving a 23.5%/11.6% relative word error rate reduction in the w/o language model setting for base/large models respectively. A detailed analysis shows that the bottom layers of our model correlate better with phonetic units, which is consistent with our intuition and explains the success of our method for ASR.
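A schematic version of the core idea, adding an SSL loss on an intermediate layer in addition to the final layer (the layer indices, projection heads, and toy masked-prediction loss are illustrative; the paper applies this to HuBERT-style masked pseudo-label prediction):

```python
import torch
import torch.nn as nn

d_model, n_layers, n_units = 256, 12, 100                # n_units: discrete target classes

encoder_layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True) for _ in range(n_layers)]
)
heads = nn.ModuleDict({
    "layer_6": nn.Linear(d_model, n_units),              # intermediate-layer supervision
    "layer_12": nn.Linear(d_model, n_units),             # standard top-layer supervision
})
criterion = nn.CrossEntropyLoss()

def forward_with_intermediate_loss(features, targets, mask):
    """features: (B, T, d) masked speech features; targets: (B, T) discrete units."""
    losses, x = [], features
    for i, layer in enumerate(encoder_layers, start=1):
        x = layer(x)
        key = f"layer_{i}"
        if key in heads:
            logits = heads[key](x)[mask]                  # predict units only at masked frames
            losses.append(criterion(logits, targets[mask]))
    return sum(losses)

B, T = 2, 50
feats = torch.randn(B, T, d_model)
units = torch.randint(0, n_units, (B, T))
mask = torch.zeros(B, T, dtype=torch.bool)
mask[:, 10:20] = True
loss = forward_with_intermediate_loss(feats, units, mask)
loss.backward()
print(float(loss))
```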
Biological intelligence systems of animals perceive the world by integrating information from different modalities and processing it simultaneously for various tasks. In contrast, current machine learning research follows a task-specific paradigm, leading to poor collaboration between tasks and high marginal costs for developing perception models for new tasks. In this paper, we present a generic perception architecture named Uni-Perceiver that processes a variety of modalities and tasks with unified modeling and shared parameters. Specifically, Uni-Perceiver encodes different task inputs and targets from arbitrary modalities into a unified representation space with a modality-agnostic Transformer encoder and lightweight modality-specific tokenizers. Different perception tasks are modeled with the same formulation, namely finding the maximum-likelihood target for each input through the similarity of their representations. The model is pre-trained on several unimodal and multimodal tasks and evaluated on a variety of downstream tasks, including novel tasks that did not appear in the pre-training stage. The results show that our pre-trained model can achieve reasonable performance even on novel tasks without any tuning. Performance can be raised to a level close to state-of-the-art methods by prompt tuning on 1% of the downstream task data, and full-data fine-tuning further delivers results on par with the state of the art. The code shall be released.
GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on billions-scale data. Here we address some remaining issues from the GPT-3 paper, such as a non-English LM, the performance of different sized models, and the effect of recently introduced prompt optimization on in-context learning. To achieve this, we introduce HyperCLOVA, a Korean variant of GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization and our training configuration, HyperCLOVA shows state-of-the-art in-context zero-shot and few-shot learning performance on various downstream tasks in Korean. In addition, we show the performance benefits of prompt-based learning and demonstrate how it can be integrated into a prompt engineering pipeline. We then discuss the possibility of realizing a No Code AI paradigm by providing AI prototyping capabilities to non-experts of ML through HyperCLOVA Studio, an interactive prompt engineering interface. Finally, we demonstrate the potential of our methods with three successful in-house applications.
Prompt tuning is an emerging way of adapting pre-trained language models to downstream tasks. However, existing studies mainly add prompts to the input sequence. This cannot work well because of the intermediate multi-head self-attention and feed-forward network computations, which leads to sub-optimal model optimization. We therefore propose a novel tuning approach called layer tuning, which aims to add learnable parameters inside the Transformer layers. Specifically, we focus on layer tuning for the feed-forward network in the Transformer, namely FL-tuning, which introduces additional units into the hidden layer of each feed-forward network. We conduct extensive experiments on the public CLUE benchmark. The results show that: 1) our FL-tuning outperforms prompt tuning methods under both full-data and few-shot settings in almost all cases; in particular, it improves accuracy on WSC 1.0 by 17.93% (full-data setting) and improves F1 on CLUENER over P-tuning v2 (few-shot setting); 2) FL-tuning is more stable and converges about 1.17 times faster than P-tuning v2; 3) with only about 3% of the Transformer's parameters to train, FL-tuning is comparable with fine-tuning on most datasets and significantly outperforms it on several (e.g., accuracy improved by 12.9% on WSC 1.1). The source code is available at https://github.com/genggui001/fl-tuning.
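A rough sketch of inserting extra trainable hidden units alongside a frozen feed-forward network (the wrapper class and dimensions are illustrative, not the released code):

```python
import torch
import torch.nn as nn

class FLTunedFFN(nn.Module):
    """Wraps a frozen FFN (W1, W2) and adds `extra` trainable hidden units in parallel."""
    def __init__(self, ffn_in, ffn_out, extra=64):
        super().__init__()
        self.frozen_in, self.frozen_out = ffn_in, ffn_out
        for p in list(ffn_in.parameters()) + list(ffn_out.parameters()):
            p.requires_grad = False
        d_model = ffn_in.in_features
        self.extra_in = nn.Linear(d_model, extra)          # new hidden units (trainable)
        self.extra_out = nn.Linear(extra, d_model, bias=False)
        nn.init.zeros_(self.extra_out.weight)              # start as an identity-preserving add-on
        self.act = nn.GELU()

    def forward(self, x):
        original = self.frozen_out(self.act(self.frozen_in(x)))
        added = self.extra_out(self.act(self.extra_in(x)))
        return original + added

d_model, d_ff = 768, 3072
layer = FLTunedFFN(nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model), extra=64)
x = torch.randn(2, 16, d_model)
print(layer(x).shape)                                      # torch.Size([2, 16, 768])
print(sum(p.numel() for p in layer.parameters() if p.requires_grad), "trainable parameters")
```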
Recently, the new paradigm of "pre-train, prompt, and predict" has achieved remarkable results compared with the "pre-train, fine-tune" paradigm. Following the success of prompt-based GPT-3, a series of prompt learning methods based on masked language models (MLMs) such as BERT and RoBERTa have become popular and widely used. However, another efficient pre-trained discriminative model, ELECTRA, may have been neglected. In this paper, we attempt to accomplish several NLP tasks in the zero-shot scenario using a novel prompt learning method based on replaced token detection (RTD). Experimental results show that the ELECTRA model based on RTD-prompt learning achieves surprisingly state-of-the-art zero-shot performance. Numerically, compared with MLM-RoBERTa-large and MLM-BERT-large, our RTD-ELECTRA-large improves by about 8.4% and 13.7% on average over all 15 tasks. In particular, on the SST-2 task, our RTD-ELECTRA-large achieves a striking 90.1% accuracy without any training data. Overall, compared with pre-trained masked language models, the pre-trained replaced token detection model performs better in zero-shot learning; ELECTRA is thus an excellent zero-shot learner. The source code is available at: https://github.com/nishiwen1214/rtd-electra.
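A hedged illustration of the RTD-based zero-shot idea: the template is filled with each candidate label word and the label whose word looks least "replaced" to the ELECTRA discriminator wins. The template and label words below are illustrative, not the ones from the paper.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"
tok = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)
model.eval()

def rtd_score(sentence, label_word):
    text = f"{sentence} It was {label_word}."
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]       # one logit per token; higher means "replaced"
    # Position of the label word (assumes it maps to a single wordpiece).
    word_id = tok.convert_tokens_to_ids(label_word)
    pos = (enc.input_ids[0] == word_id).nonzero(as_tuple=True)[0][-1]
    return -logits[pos].item()                # higher score = looks more original

sentence = "A charming, funny and ultimately moving film."
scores = {lbl: rtd_score(sentence, lbl) for lbl in ["great", "terrible"]}
print(max(scores, key=scores.get), scores)
```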
Because of the high cost of labeling the different modules in a task-oriented dialog (TOD) system, a major challenge in practice is to learn the different tasks with the least amount of labeled data. Recently, prompting methods over pre-trained language models (PLMs) have shown promising results for few-shot learning in TOD. To better utilize the power of PLMs, this paper proposes Comprehensive Instruction (CINS), which exploits PLMs with extra task-specific instructions. We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in TOD, namely intent classification, dialog state tracking, and natural language generation. A sequence-to-sequence model (T5) is adopted to solve these three tasks in a unified framework. Extensive experiments are conducted on these TOD tasks in realistic few-shot learning scenarios with small validation data. Empirical results demonstrate that the proposed CINS approach consistently improves over techniques that fine-tune PLMs with the raw input or short prompts.
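An illustrative construction of a (definition, constraint, prompt) instruction solved as text-to-text generation with T5; the wording of the instruction, the intent labels, and the checkpoint are assumptions, and without CINS-style fine-tuning the generated output is only a placeholder for the input format.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Schema-style instruction: definition, constraint, prompt (hypothetical wording).
definition = "Intent classification is the task of deciding what the user wants."
constraint = "The intent must be one of: book_hotel, find_restaurant, request_taxi."
prompt = "What is the intent of the utterance?"
utterance = "I need a cab to the airport at 7 am."

source = f"{definition} {constraint} {prompt} Utterance: {utterance}"
ids = tok(source, return_tensors="pt").input_ids
output = model.generate(ids, max_new_tokens=8)     # meaningful only after few-shot fine-tuning
print(tok.decode(output[0], skip_special_tokens=True))
```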
Automatic speech recognition (ASR) systems have found numerous industrial applications in very diverse domains. Since domain-specific systems perform better than their generic counterparts on in-domain evaluation, the need for memory- and compute-efficient domain adaptation is obvious. In particular, adapting the parameter-heavy Transformer-based language models used for rescoring ASR hypotheses is challenging. In this work, we introduce domain prompts, a method that trains a small number of domain-token embedding parameters to prime a Transformer-based LM towards a particular domain. With just a handful of extra parameters, we achieve a 7-14% WER improvement over the baseline of an unadapted LM. Despite being parameter-efficient, these improvements are comparable to those of fully fine-tuned models with hundreds of millions of parameters. With ablations over prompts, dataset sizes, initializations, and domains, we provide evidence for the benefits of using domain prompts in ASR systems.
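A hedged sketch of the rescoring setup: a handful of trainable domain embeddings are prepended to a frozen GPT-2-style LM and the prompted LM's log-likelihood scores n-best hypotheses. The prompt length and scoring loop are illustrative; in practice the domain prompt would first be trained on in-domain text by minimizing the same LM loss.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():
    p.requires_grad = False                                # backbone stays frozen

n_domain_tokens, d_model = 10, lm.config.n_embd
domain_prompt = nn.Parameter(torch.randn(n_domain_tokens, d_model) * 0.02)  # only trainable weights

def hypothesis_score(text):
    ids = tok(text, return_tensors="pt").input_ids
    tok_embeds = lm.transformer.wte(ids)
    embeds = torch.cat([domain_prompt.unsqueeze(0), tok_embeds], dim=1)
    # Ignore the prompt positions in the loss by labelling them -100.
    labels = torch.cat([torch.full((1, n_domain_tokens), -100, dtype=torch.long), ids], dim=1)
    with torch.no_grad():
        out = lm(inputs_embeds=embeds, labels=labels)
    return -out.loss.item()                                # higher = more plausible in-domain

nbest = ["recognise speech with domain prompts", "wreck a nice beach with domain prompts"]
print(max(nbest, key=hypothesis_score))
```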
Most NER methods rely on extensive labeled data for model training and struggle in low-resource scenarios where training data is limited. Compared with resource-rich source domains, existing dominant approaches usually face the challenge that the target domain has a different label set, which can be summarized as class transfer and domain transfer. In this paper, we propose a lightweight tuning paradigm for low-resource NER via pluggable prompting (LightNER). Specifically, we construct a unified learnable verbalizer of entity categories to generate the entity span sequence and entity categories without any label-specific classifiers, thereby addressing the class transfer problem. We further propose a pluggable guidance module that incorporates learnable parameters into the self-attention layer as guidance, which can re-modulate the attention and adapt the pre-trained weights. Note that we only tune the inserted modules while keeping all parameters of the pre-trained language model fixed, which makes our approach lightweight and flexible for low-resource scenarios and better able to transfer knowledge across domains. Experimental results show that LightNER can obtain comparable performance in the standard supervised setting and outperform strong baselines in low-resource settings. The code is available at https://github.com/zjunlp/deepke/tree/main/main/example/ner/few-shot.