Adapting large pre-trained models (PTMs) through fine-tuning imposes prohibitive computational and storage burdens. Recent studies of parameter-efficient tuning (PET) find that optimizing only a small portion of parameters conditioned on PTMs can yield on-par performance compared to conventional fine-tuning. Generally, PET methods exquisitely design parameter-efficient modules (PET modules) that can be applied to arbitrary fine-grained positions inside PTMs. However, the effectiveness of these fine-grained positions largely relies on sophisticated manual designation, which usually produces sub-optimal results. In contrast to manual designation, we explore constructing PET modules in an automatic manner: we automatically \textbf{S}earch for the \textbf{S}parse \textbf{S}tructure of \textbf{P}arameter-\textbf{E}fficient \textbf{T}uning (S$^3$PET). Based on a unified framework of various PET methods, S$^3$PET conducts a differentiable PET structure search through bi-level optimization and proposes a shifted global sigmoid method to explicitly control the number of trainable parameters. Extensive experiments show that S$^3$PET surpasses manual and random structures with fewer trainable parameters. The searched structures preserve more than 99\% of fine-tuning performance with 0.01\% trainable parameters. Moreover, the advantage of S$^3$PET is amplified under extremely low trainable-parameter budgets (0.0009\%$\sim$0.01\%). The searched structures are transferable and explainable, providing suggestions and guidance for the future design of PET methods.
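The budget-controlled gating idea can be pictured roughly as below. This is a minimal sketch of my own: the gate parameterization, the way parameter counts are budgeted, and the bi-level schedule (alternating updates of module weights and structural scores on separate data splits) are assumptions for illustration, not the paper's exact formulation, and all names (`GatedDelta`, `find_global_shift`) are hypothetical.

```python
import torch
import torch.nn as nn

class GatedDelta(nn.Module):
    """A candidate PET module (a low-rank delta at one position) whose output is
    scaled by a gate derived from a structural score."""
    def __init__(self, dim, rank=4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)                 # starts as a no-op
        self.score = nn.Parameter(torch.zeros(1))      # structural parameter
        self.n_params = self.down.weight.numel() + self.up.weight.numel()

    def forward(self, h, shift):
        gate = torch.sigmoid(self.score - shift)       # shifted sigmoid gate
        return h + gate * self.up(self.down(h))

def find_global_shift(modules, budget):
    """Bisection for a single global shift so that the expected number of
    activated PET parameters, sum_i sigmoid(score_i - shift) * n_params_i,
    roughly matches the trainable-parameter budget."""
    with torch.no_grad():
        scores = torch.cat([m.score for m in modules])
        counts = torch.tensor([float(m.n_params) for m in modules])
        lo, hi = scores.min().item() - 20.0, scores.max().item() + 20.0
        for _ in range(50):
            mid = (lo + hi) / 2
            expected = (torch.sigmoid(scores - mid) * counts).sum().item()
            lo, hi = (mid, hi) if expected > budget else (lo, mid)
        return (lo + hi) / 2
```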
Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wasteful, as it requires storing a separate copy of the model for each task. Recent work has developed parameter-efficient fine-tuning methods, but these approaches either still require a relatively large number of parameters or underperform standard fine-tuning. In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work. Compacter accomplishes this by building on ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. Specifically, Compacter inserts task-specific weight matrices into a pretrained model's weights, which are computed efficiently as a sum of Kronecker products between shared "slow" weights and "fast" rank-one matrices defined per Compacter layer. By training only 0.047% of a pretrained model's parameters, Compacter performs on par with standard fine-tuning on GLUE and outperforms standard fine-tuning in low-resource settings. Our code is publicly available at \url{https://github.com/rabeehk/compacter}.
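The sum-of-Kronecker-products construction can be sketched as follows. This is a minimal illustration under my own naming (`PHMRankOneLinear`); in Compacter the "slow" matrices are shared across all adapter layers, and a full adapter stacks a down- and an up-projection of this form around a nonlinearity with a residual connection.

```python
import torch
import torch.nn as nn

class PHMRankOneLinear(nn.Module):
    """Sketch of a Compacter-style layer: the adapter weight W (d_in x d_out) is
    a sum of Kronecker products A_i kron B_i between n x n 'slow' matrices A_i
    (shared across adapter layers in Compacter; kept per-layer here for brevity)
    and rank-one 'fast' matrices B_i = s_i t_i^T defined per layer."""
    def __init__(self, d_in, d_out, n=4):
        super().__init__()
        assert d_in % n == 0 and d_out % n == 0
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.01)          # slow weights
        self.s = nn.Parameter(torch.randn(n, d_in // n, 1) * 0.01)  # fast factors
        self.t = nn.Parameter(torch.zeros(n, 1, d_out // n))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):                       # x: (..., d_in)
        B = self.s @ self.t                     # (n, d_in/n, d_out/n), each rank one
        W = sum(torch.kron(self.A[i], B[i]) for i in range(self.n))  # (d_in, d_out)
        return x @ W + self.bias
```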
Fine-tuning a Pre-trained Language Model (PLM) on a specific downstream task has been a well-known paradigm in Natural Language Processing. However, with the ever-growing size of PLMs, training the entire model on several downstream tasks becomes very expensive and resource-hungry. Recently, different Parameter Efficient Tuning (PET) techniques have been proposed to improve the efficiency of fine-tuning PLMs. One popular category of PET methods is the family of low-rank adaptation methods, which insert learnable truncated SVD modules into the original model either sequentially or in parallel. However, low-rank decomposition suffers from limited representation power. In this work, we address this problem using the Kronecker product instead of the low-rank representation. We introduce KronA, a Kronecker product-based adapter module for efficient fine-tuning of Transformer-based PLMs. We apply the proposed methods to fine-tuning T5 on the GLUE benchmark to show that incorporating the Kronecker-based modules can outperform state-of-the-art PET methods.
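A Kronecker-product update can be sketched as below. This is an illustrative sketch, not the paper's exact configuration: the factor shapes, the zero initialization of one factor, and the fixed scale are assumptions of mine, and the class name is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KroneckerAdapter(nn.Module):
    """Sketch of a Kronecker-product weight update: where LoRA uses a low-rank
    Delta W = B @ A, the update here is Delta W = A kron B, which can reach full
    rank with very few trainable parameters."""
    def __init__(self, frozen_linear, a_rows, a_cols, scale=1.0):
        super().__init__()
        d_out, d_in = frozen_linear.weight.shape
        assert d_out % a_rows == 0 and d_in % a_cols == 0
        self.base = frozen_linear
        for p in self.base.parameters():
            p.requires_grad = False             # keep the pre-trained weight frozen
        self.A = nn.Parameter(torch.randn(a_rows, a_cols) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out // a_rows, d_in // a_cols))
        self.scale = scale

    def forward(self, x):
        delta_w = torch.kron(self.A, self.B)    # (d_out, d_in)
        return self.base(x) + self.scale * F.linear(x, delta_w)
```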
Although parameter-efficient tuning (PET) methods have shown great potential on natural language processing (NLP) tasks, their effectiveness for large-scale transfer in computer vision (CV) tasks remains under-studied. This paper proposes Conv-Adapter, a PET module designed for ConvNets. Conv-Adapter is lightweight, domain-transferable, and architecture-agnostic, with generalized performance across different tasks. When transferring to downstream tasks, Conv-Adapter learns task-specific feature modulation of the intermediate representations of the backbone while keeping the pre-trained parameters frozen. By introducing only a small number of learnable parameters, e.g., only 3.5% of the full fine-tuning parameters of ResNet50, Conv-Adapter outperforms previous PET baseline methods and achieves performance comparable to or surpassing full fine-tuning on 23 classification tasks. It also shows superior performance on few-shot classification, with an average margin of 3.39%. Beyond classification, Conv-Adapter generalizes to detection and segmentation tasks with more than 50% fewer parameters but performance comparable to traditional full fine-tuning.
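A generic convolutional adapter of this flavor might look like the sketch below. The exact Conv-Adapter design (kernel sizes, where the modulation is applied, additive vs. multiplicative fusion) is not specified in the abstract, so everything here is an assumption meant only to illustrate "lightweight module added to a frozen ConvNet stage".

```python
import torch
import torch.nn as nn

class ConvBottleneckAdapter(nn.Module):
    """Illustrative lightweight adapter for a frozen ConvNet stage: a 1x1
    reduction, a depthwise 3x3 convolution, and a 1x1 expansion produce a
    task-specific residual that is added to the stage output."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.adapter = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False))
        nn.init.zeros_(self.adapter[-1].weight)        # start as a no-op

    def forward(self, frozen_stage_out):               # (B, C, H, W)
        return frozen_stage_out + self.adapter(frozen_stage_out)
```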
Adapting large pre-trained models to downstream tasks has recently been adopted in a variety of domains. However, updating the entire parameter set of a large pre-trained model is costly. Although recently proposed parameter-efficient transfer learning (PETL) techniques allow updating only a small subset of parameters (e.g., using only 2% of the parameters) inside a pre-trained backbone network for a new task, they reduce the training memory requirement by at most 30%. This is because the gradient computation for the trainable parameters still requires backpropagation through the large pre-trained backbone model. To address this, we propose Ladder Side-Tuning (LST), a new PETL technique that reduces training memory requirements by substantially more. Unlike existing parameter-efficient methods that insert additional parameters into the backbone network, we train a ladder side network, a small and separate network that takes intermediate activations from the backbone network as input via shortcut connections (ladders) and makes predictions. LST has significantly lower memory requirements than previous methods because it does not require backpropagation through the backbone network, but only through the side network and ladder connections. We evaluate our method with various models (T5, CLIP-T5) on NLP (GLUE) and vision-language (VQA, GQA, NLVR2, MSCOCO) tasks. LST saves 69% of the memory cost of fine-tuning the whole network, while other methods save only 26% at similar parameter usage (hence 2.7x more memory savings). Moreover, LST achieves higher accuracy than Adapter and LoRA in a low-memory regime. To further show the advantage of this better memory efficiency, we also apply LST to larger T5 models (T5-large, T5-3B), attaining better GLUE performance than full fine-tuning and other PETL methods. The same trend also holds in our experiments on VL tasks.
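The memory argument can be made concrete with a rough sketch: because the backbone is only ever run under no-grad and its activations are used as detached inputs, autograd never builds a graph through it. The fusion rule, the plain Linear layers, and all names below are simplifying assumptions of mine, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LadderSideNetwork(nn.Module):
    """Sketch of ladder side-tuning: a frozen backbone produces intermediate
    activations that are projected down and fused into a small trainable side
    network via gated shortcut ("ladder") connections, so no gradient flows
    through the backbone."""
    def __init__(self, backbone_layers, d_backbone, d_side, num_classes):
        super().__init__()
        self.backbone = nn.ModuleList(backbone_layers)
        for p in self.backbone.parameters():
            p.requires_grad = False
        n = len(self.backbone)
        self.init_proj = nn.Linear(d_backbone, d_side)
        self.ladders = nn.ModuleList([nn.Linear(d_backbone, d_side) for _ in range(n)])
        self.side = nn.ModuleList([nn.Linear(d_side, d_side) for _ in range(n)])
        self.gates = nn.Parameter(torch.zeros(n))      # learned fusion gates
        self.head = nn.Linear(d_side, num_classes)

    def forward(self, x):                              # x: (B, T, d_backbone)
        s = self.init_proj(x.detach())
        h = x
        for i, layer in enumerate(self.backbone):
            with torch.no_grad():                      # backbone runs inference-only
                h = layer(h)
            g = torch.sigmoid(self.gates[i])
            s = torch.relu(self.side[i](g * self.ladders[i](h) + (1 - g) * s))
        return self.head(s.mean(dim=1))                # pool tokens, then classify
```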
Prompt-tuning is a new paradigm for fine-tuning pre-trained language models in a parameter-efficient way. Here, we explore the use of HyperNetworks to generate hyper-prompts: we propose HyperPrompt, a novel architecture for prompt-based task-conditioning of self-attention in Transformers. The hyper-prompts are end-to-end learnable via generation by a HyperNetwork. HyperPrompt allows the network to learn task-specific feature maps, where the hyper-prompts serve as task-global memories for the queries to attend to, while also enabling flexible information sharing among tasks. We show that HyperPrompt is competitive against strong multi-task learning baselines with as few as 0.14\% additional task-conditioning parameters, achieving great parameter and computational efficiency. Through extensive empirical experiments, we demonstrate that HyperPrompt can achieve superior performance over strong T5 multi-task learning baselines and parameter-efficient adapter variants, including Prompt-Tuning and HyperFormer++, on the natural language understanding benchmarks GLUE and SuperGLUE across many model sizes.
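A rough sketch of the hypernetwork-to-prompt pipeline follows. The two-layer hypernetwork, the use of layer embeddings, and all dimensions are illustrative assumptions on my part; only the overall idea (a small network generating per-task prompt keys and values that are prepended to self-attention K and V) comes from the abstract.

```python
import torch
import torch.nn as nn

class HyperPromptGenerator(nn.Module):
    """Sketch: a small hypernetwork maps a task embedding (and a layer embedding)
    to prompt keys and values that get concatenated to the self-attention K and V
    of that layer."""
    def __init__(self, num_tasks, num_layers, task_dim, prompt_len, d_model):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, task_dim)
        self.layer_emb = nn.Embedding(num_layers, task_dim)
        self.hyper = nn.Sequential(
            nn.Linear(2 * task_dim, task_dim), nn.ReLU(),
            nn.Linear(task_dim, 2 * prompt_len * d_model))
        self.prompt_len, self.d_model = prompt_len, d_model

    def forward(self, task_id, layer_id):              # LongTensors of shape (B,)
        z = torch.cat([self.task_emb(task_id), self.layer_emb(layer_id)], dim=-1)
        kv = self.hyper(z).view(-1, 2, self.prompt_len, self.d_model)
        prompt_k, prompt_v = kv[:, 0], kv[:, 1]        # each (B, prompt_len, d_model)
        return prompt_k, prompt_v                      # to be prepended to K and V
```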
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g., adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters is trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)$^3$ that scales activations by learned vectors, attaining stronger performance while introducing only a relatively small number of new parameters. We also propose a simple recipe based on the T0 model, called T-Few, that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available.
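The "scale activations by learned vectors" idea is simple enough to sketch. Below is a minimal illustration for the feed-forward part of a block, under my own naming; (IA)$^3$ also rescales attention keys and values with analogous vectors, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IA3FeedForward(nn.Module):
    """Sketch of (IA)^3 applied to a frozen feed-forward block: only a learned
    scaling vector over the hidden activations (initialized to ones) is trained,
    while the pre-trained linear layers stay frozen."""
    def __init__(self, ffn_in, ffn_out):
        super().__init__()
        self.ffn_in, self.ffn_out = ffn_in, ffn_out     # pre-trained nn.Linear layers
        for p in list(ffn_in.parameters()) + list(ffn_out.parameters()):
            p.requires_grad = False
        self.l_ff = nn.Parameter(torch.ones(ffn_in.out_features))

    def forward(self, x):
        h = F.gelu(self.ffn_in(x))
        return self.ffn_out(self.l_ff * h)              # elementwise rescaling
```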
Fine-tuning large pre-trained language models on downstream tasks has become the de facto learning paradigm in NLP. However, conventional approaches fine-tune all the parameters of the pre-trained model, which becomes prohibitive as the model size and the number of tasks grow. Recent work has proposed a variety of parameter-efficient transfer learning methods that fine-tune only a small number of (extra) parameters to attain strong performance. While effective, the critical ingredients for success and the connections among the various methods are poorly understood. In this paper, we break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them. Specifically, we re-frame them as modifications to specific hidden states in pre-trained models, and define a set of design dimensions along which the different methods vary, such as the function used to compute the modification and the position at which the modification is applied. Through comprehensive empirical studies across machine translation, text summarization, language understanding, and text classification benchmarks, we use the unified view to identify important design choices in previous methods. Furthermore, our unified framework enables the transfer of design elements across different approaches, and as a result we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving results comparable to fine-tuning all parameters on all four tasks.
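The unified "hidden-state modification" view can be sketched with one module whose design knobs select between method families. This is my reading of the framework, with hypothetical names; it is meant to show how adapter-like and LoRA-like updates share the form delta_h = f(x W_down) W_up and differ in the nonlinearity, insertion form, and scaling.

```python
import torch
import torch.nn as nn

class DeltaModule(nn.Module):
    """Sketch of the unified view: each PET method computes a modification
    delta_h = f(x @ W_down) @ W_up and composes it with a hidden state h.
    Choices of f, sequential vs. parallel insertion, and the scaling s roughly
    recover adapters, LoRA, and related variants."""
    def __init__(self, d_model, bottleneck, nonlinear=True, parallel=False, scale=1.0):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck, bias=False)
        self.up = nn.Linear(bottleneck, d_model, bias=False)
        nn.init.zeros_(self.up.weight)
        self.f = nn.ReLU() if nonlinear else nn.Identity()
        self.parallel = parallel
        self.scale = scale

    def forward(self, h, x=None):
        # Sequential insertion modifies h itself; parallel insertion computes
        # the delta from the sub-layer input x instead.
        src = x if (self.parallel and x is not None) else h
        return h + self.scale * self.up(self.f(self.down(src)))

# e.g. a sequential adapter:  DeltaModule(768, 64, nonlinear=True,  parallel=False)
# e.g. a LoRA-style update:   DeltaModule(768, 8,  nonlinear=False, parallel=True, scale=2.0)
```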
Recent work has explored the potential to adapt a pre-trained vision transformer (ViT) by updating only a few parameters so as to improve storage efficiency, called parameter-efficient transfer learning (PETL). Current PETL methods have shown that by tuning only 0.5% of the parameters, ViT can be adapted to downstream tasks with even better performance than full fine-tuning. In this paper, we aim to further promote the efficiency of PETL to meet the extreme storage constraint in real-world applications. To this end, we propose a tensorization-decomposition framework to store the weight increments, in which the weights of each ViT are tensorized into a single 3D tensor, and their increments are then decomposed into lightweight factors. In the fine-tuning process, only the factors need to be updated and stored, termed Factor-Tuning (FacT). On VTAB-1K benchmark, our method performs on par with NOAH, the state-of-the-art PETL method, while being 5x more parameter-efficient. We also present a tiny version that only uses 8K (0.01% of ViT's parameters) trainable parameters but outperforms full fine-tuning and many other PETL methods such as VPT and BitFit. In few-shot settings, FacT also beats all PETL baselines using the fewest parameters, demonstrating its strong capability in the low-data regime.
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as pre-trained models grow ever larger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) by default, the fine-tuned model has the same size as its starting point, which is neither sensible, given its more specialized functionality, nor practical, since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning that leverages sparsity in both the weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter-efficient fine-tuning, by enforcing sparsity-aware low-rank updates on top of the pre-trained weights; and (ii) resource-efficient inference, by encouraging a sparse weight structure in the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via a unified approach. Extensive experiments and in-depth investigations with diverse network backbones (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets consistently show impressive parameter and inference efficiency while maintaining competitive downstream performance. For instance, DSEE saves about 25% of inference FLOPs while achieving comparable performance, with 0.5% trainable parameters on BERT. Code is available at https://github.com/vita-group/dsee.
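A sparse-plus-low-rank weight update in this spirit can be sketched as follows. The random sparsity mask, the rank, and the names are placeholders of mine; DSEE derives its sparse patterns from the model rather than at random, and also sparsifies the final weights, which is not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLowRankDelta(nn.Module):
    """Sketch of a dually-sparse update: the weight update is decomposed into a
    low-rank part U @ V plus a sparse residual restricted to a fixed mask, while
    the pre-trained weight itself stays frozen."""
    def __init__(self, frozen_linear, rank=8, sparsity=0.01):
        super().__init__()
        d_out, d_in = frozen_linear.weight.shape
        self.base = frozen_linear
        for p in self.base.parameters():
            p.requires_grad = False
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.01)
        self.V = nn.Parameter(torch.zeros(rank, d_in))
        # Fixed random mask for illustration only.
        self.register_buffer("mask", (torch.rand(d_out, d_in) < sparsity).float())
        self.S = nn.Parameter(torch.zeros(d_out, d_in))

    def forward(self, x):
        delta_w = self.U @ self.V + self.S * self.mask
        return self.base(x) + F.linear(x, delta_w)
```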
Recently, fine-tuning language models pre-trained on large text corpora has provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical as model sizes grow rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VL-T5. We evaluate our methods in a unified multi-task setup on four diverse V&L tasks: VQAv2, GQA, NLVR2, and MSCOCO image captioning. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to acquire knowledge across tasks. Our results demonstrate that training adapters with the weight-sharing technique (4.4% of total parameters) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis, including the combination of adapters and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/vl_adapter.
This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts (small prefix embedding vectors pre-trained for different tasks). Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared between tasks in multi-task training, are updated, while the original LM and source prompts are intact. ATTEMPT is highly parameter-efficient (e.g., updates 2,300 times fewer parameters than full fine-tuning) while achieving high task performance using knowledge from high-resource tasks. Moreover, it is modular using pre-trained soft prompts, and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned or other parameter-efficient tuning approaches that use over ten times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings.
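The per-instance interpolation of prompts can be sketched roughly as below. The instance summary, the dot-product attention form, and all names are my assumptions; only the overall flow (frozen source prompts, a trainable target prompt, and an attention module that mixes them per input) follows the abstract.

```python
import torch
import torch.nn as nn

class AttentionalPromptMixture(nn.Module):
    """Rough sketch of an ATTEMPT-style mixture: frozen source prompts and a
    trainable target prompt are interpolated per instance by a small attention
    module, and the mixed prompt is prepended to the input embeddings."""
    def __init__(self, d_model, prompt_len, source_prompts):
        super().__init__()
        # source_prompts: (S, prompt_len, d_model), pre-trained and kept frozen
        self.register_buffer("source", source_prompts)
        self.target = nn.Parameter(torch.randn(prompt_len, d_model) * 0.01)
        self.attn_proj = nn.Linear(d_model, d_model)   # shared attention module

    def forward(self, x_embed):                        # x_embed: (B, T, d_model)
        query = x_embed.mean(dim=1)                    # crude instance summary
        candidates = torch.cat([self.source, self.target.unsqueeze(0)], dim=0)
        keys = candidates.mean(dim=1)                  # (S + 1, d_model)
        weights = (self.attn_proj(query) @ keys.t()).softmax(dim=-1)   # (B, S + 1)
        mixed = torch.einsum("bs,sld->bld", weights, candidates)       # (B, L, D)
        return torch.cat([mixed, x_embed], dim=1)
```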
Conventional fine-tuning encounters increasing difficulties given the size of current Pre-trained Language Models, which makes parameter-efficient tuning the focal point of frontier research. Previous methods in this field add tunable adapters into the MHA or/and FFN of Transformer blocks to enable PLMs to achieve transferability. However, the power of layer normalization, an important part of the Transformer architecture, has been ignored for parameter-efficient tuning. In this paper, we first propose LN-tuning, which tunes only the gain and bias terms of the Layer Normalization modules, amounting to 0.03\% of the parameters; it is highly time-efficient and significantly superior to baselines with fewer than 0.1\% tunable parameters. Further, we study unified frameworks that combine LN-tuning with previous methods and find that: (1) the unified framework combining prefix-tuning, the adapter-based method working on MHA, and LN-tuning achieves SOTA performance; (2) unified frameworks that tune MHA and LayerNorm simultaneously can improve performance, but those that tune FFN and LayerNorm simultaneously cause performance decreases. An ablation study validates that LN-tuning has no redundant parameters and gives a further understanding of it.
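Restricting training to LayerNorm gains and biases is easy to sketch with standard tooling. The snippet below is illustrative; leaving the task head trainable is a common choice I am assuming, not a detail from the abstract.

```python
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

def apply_ln_tuning(model):
    """Freeze everything except the gain (weight) and bias of LayerNorm modules,
    plus the task head (an assumption here, not from the abstract)."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, nn.LayerNorm):
            m.weight.requires_grad = True
            m.bias.requires_grad = True
    if hasattr(model, "classifier"):
        for p in model.classifier.parameters():
            p.requires_grad = True
    return model

model = apply_ln_tuning(
    AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2))
n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {n_train / n_total:.5f}")
```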
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small number of trainable parameters in the input space (a small fraction of the model parameters) while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.
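Prepending trainable prompt tokens to a frozen ViT can be sketched as below (a shallow variant where prompts are inserted only at the input; the number of prompts, the init range, and the class names are assumptions of mine).

```python
import torch
import torch.nn as nn

class VisualPromptWrapper(nn.Module):
    """Sketch of shallow visual prompt tuning: a small set of learnable prompt
    tokens is prepended to the patch-token sequence of a frozen Vision
    Transformer; only the prompts and the classification head are trained."""
    def __init__(self, vit_blocks, embed_dim, num_prompts=10, num_classes=100):
        super().__init__()
        self.blocks = nn.ModuleList(vit_blocks)
        for p in self.blocks.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.empty(1, num_prompts, embed_dim))
        nn.init.uniform_(self.prompts, -0.1, 0.1)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):                 # tokens: (B, N, D) incl. [CLS]
        B = tokens.size(0)
        x = torch.cat([tokens[:, :1], self.prompts.expand(B, -1, -1), tokens[:, 1:]], dim=1)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x[:, 0])              # classify from the [CLS] token
```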
The size of vision models has grown exponentially over the last few years, especially after the emergence of Vision Transformers. This has motivated the development of parameter-efficient tuning methods, such as learning adapter layers or visual prompt tokens, which allow a tiny portion of the model parameters to be trained while the vast majority, obtained from pre-training, are kept frozen. However, designing a proper tuning method is non-trivial: one might need to try out a lengthy list of design choices, not to mention that each downstream dataset often requires custom designs. In this paper, we view the existing parameter-efficient tuning methods as "prompt modules" and propose Neural prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, the optimal design of prompt modules through a neural architecture search algorithm, specifically for each downstream dataset. By conducting extensive experiments on over 20 vision datasets, we demonstrate that NOAH (i) is superior to individual prompt modules, (ii) has good few-shot learning ability, and (iii) is domain-generalizable. The code and models are available at https://github.com/davidzhangyuanhan/noah.
Recent parameter-efficient language model tuning (PELT) methods can match the performance of fine-tuning with far fewer trainable parameters and perform especially well when training data is limited. However, different PELT methods may perform rather differently on the same task, making it non-trivial to select the most appropriate method for a specific task, especially considering the fast-growing number of new PELT methods and tasks. In light of model diversity and the difficulty of model selection, we propose a unified framework, UniPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup via a gating mechanism. On the GLUE benchmark, UniPELT consistently achieves 1~4% gains compared to the best individual PELT method that it incorporates, and it even outperforms fine-tuning under different setups. Moreover, UniPELT generally surpasses the upper bound obtained by taking the best performance of all its submodules used individually on each task, indicating that a mixture of multiple PELT methods may be inherently more effective than single methods.
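The gated combination of submodules can be sketched roughly as below. In UniPELT the gates are attached per method and per layer in method-specific ways; this single-layer additive form and all names are simplifications of mine.

```python
import torch
import torch.nn as nn

class GatedPELTMixture(nn.Module):
    """Sketch of a UniPELT-style combination: several PET submodules (e.g. an
    adapter, a LoRA branch, a prefix module), each producing a residual delta of
    shape (B, T, D), are gated by values computed from the layer input and summed
    onto the hidden state."""
    def __init__(self, d_model, submodules):
        super().__init__()
        self.submodules = nn.ModuleList(submodules)
        self.gate_proj = nn.ModuleList([nn.Linear(d_model, 1) for _ in submodules])

    def forward(self, h):                              # h: (B, T, d_model)
        out = h
        for sub, proj in zip(self.submodules, self.gate_proj):
            gate = torch.sigmoid(proj(h).mean(dim=1, keepdim=True))   # (B, 1, 1)
            out = out + gate * sub(h)
        return out
```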
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers a significant accuracy drop compared to full fine-tuning. In this paper, we propose a new parameter-efficient fine-tuning method termed SSF, meaning that one only needs to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning. In this way, SSF also surprisingly outperforms other parameter-efficient fine-tuning approaches even with a smaller number of tunable parameters. Furthermore, different from some existing parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce extra parameters and computational cost in both the training and inference stages, SSF only adds learnable parameters during the training stage, and these additional parameters can be merged into the original pre-trained model weights via re-parameterization in the inference phase. With the proposed SSF, our model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%) performance improvements on FGVC and VTAB-1k in terms of Top-1 accuracy compared to full fine-tuning, while fine-tuning only about 0.3M parameters. We also conduct extensive experiments across various model families (CNNs, Transformers, and MLPs) and datasets. Results on 26 image classification datasets in total and 3 robustness & out-of-distribution datasets show the effectiveness of SSF. Code is available at https://github.com/dongzelian/SSF.
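The scale-and-shift operation, and the re-parameterization that folds it back into a preceding linear layer at inference time, can be sketched as below (an illustrative sketch with my own names; the merge shown assumes the scale/shift directly follows a linear layer).

```python
import torch
import torch.nn as nn

class SSFScaleShift(nn.Module):
    """Sketch of an SSF-style operation: features produced by a frozen layer are
    modulated with a learnable per-channel scale gamma and shift beta."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                      # x: (..., dim)
        return self.gamma * x + self.beta

def merge_into_linear(linear, ssf):
    """Re-parameterization sketch: fold gamma/beta that follow a linear layer
    back into its weight and bias, so inference adds no extra parameters.
    gamma * (W x + b) + beta == (diag(gamma) W) x + (gamma * b + beta)."""
    with torch.no_grad():
        linear.weight.mul_(ssf.gamma.unsqueeze(1))
        if linear.bias is None:
            linear.bias = nn.Parameter(torch.zeros(linear.out_features))
        linear.bias.mul_(ssf.gamma).add_(ssf.beta)
    return linear
```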
With increasing privacy concerns on data, recent studies have made significant progress using federated learning (FL) on privacy-sensitive natural language processing (NLP) tasks. Much literature suggests fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs for the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLMs tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. Overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance in various FL settings. To facilitate the research of PETuning in FL, we also develop a federated tuning framework FedPETuning, which allows practitioners to exploit different PETuning methods under the FL training paradigm conveniently. The source code is available at \url{https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning}.
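The communication step described here (clients tune only lightweight PET parameters locally and the server aggregates them) can be sketched as below. The uniform averaging and the names are assumptions of mine; standard FedAvg would weight clients by local data size, and the framework's actual aggregation may differ.

```python
import torch

def aggregate_pet_parameters(client_states, trainable_keys):
    """Sketch of the server-side step in federated parameter-efficient tuning:
    clients upload only their lightweight trainable parameters (adapter / prompt /
    LoRA weights), and the server averages them."""
    aggregated = {}
    for key in trainable_keys:
        aggregated[key] = torch.stack(
            [state[key].float() for state in client_states]).mean(dim=0)
    return aggregated

# Usage sketch: each client sends only its PET state dict; the averaged result
# is loaded back into every client's model with strict=False so frozen backbone
# weights are left untouched.
# new_round_state = aggregate_pet_parameters(uploaded_states, pet_keys)
# model.load_state_dict(new_round_state, strict=False)
```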
In computer vision, it has achieved great transfer learning performance via adapting large-scale pretrained vision models (e.g., vision transformers) to downstream tasks. Common approaches for model adaptation either update all model parameters or leverage linear probes. In this paper, we aim to study parameter-efficient model adaptation strategies for vision transformers on the image classification task. We formulate efficient model adaptation as a subspace training problem and perform a comprehensive benchmarking over different efficient adaptation methods. We conduct an empirical study on each efficient model adaptation method focusing on its performance alongside parameter cost. Furthermore, we propose a parameter-efficient model adaptation framework, which first selects submodules by measuring local intrinsic dimensions and then projects them into subspace for further decomposition via a novel Kronecker Adaptation (KAdaptation) method. We analyze and compare our method with a diverse set of baseline model adaptation methods (including state-of-the-art methods for pretrained language models). Our method performs the best in terms of the tradeoff between accuracy and parameter efficiency across 20 image classification datasets under the few-shot setting and 7 image classification datasets under the full-shot setting.
Activation functions can have a significant impact on reducing the topological complexity of input data and thus improve model performance. Selecting a suitable activation function is an essential step in neural model design. However, the choice of activation function is seldom discussed or explored in Transformer-based language models. Their activation functions are chosen beforehand and then kept fixed from pre-training to fine-tuning. As a result, the inductive biases they impose on the model cannot be adjusted during this long life cycle. Moreover, subsequently developed models (e.g., RoBERTa, BART, and GPT-3) often follow prior work (e.g., BERT) in using the same activation functions without justification. In this paper, we investigate the effectiveness of using Rational Activation Functions (RAFs) in the Transformer architecture. In contrast to conventional, predefined activation functions, RAFs can adaptively learn an optimal activation function from the input data. Our experiments show that the RAF-based Transformer (RAFT) achieves lower validation perplexity than a vanilla BERT with the GELU function. We further evaluate RAFT on downstream tasks in low- and full-data settings. Our results indicate that RAFT outperforms its counterpart models on most tasks and settings. For instance, in a low-data scenario (with 100 training examples), RAFT outperforms on the GLUE benchmark by 5.71 points on average, and by 2.05 points on SQuAD in the full-data setting. Analysis of the shapes of the learned RAFs further reveals that they vary substantially across the layers of the pre-trained model and mostly look quite different from conventional activation functions. RAFT opens a new research direction for analyzing and interpreting pre-trained models based on the learned activation functions.
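A rational activation is a learnable ratio of polynomials; one would replace the GELU inside each Transformer FFN with such a module. Below is a minimal sketch using a common "safe" parameterization that keeps the denominator positive; the polynomial degrees and initialization are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Sketch of a learnable rational activation function
    R(x) = P(x) / Q(x) = sum_j a_j x^j / (1 + |sum_k b_k x^k|)."""
    def __init__(self, p_degree=3, q_degree=2):
        super().__init__()
        self.a = nn.Parameter(torch.randn(p_degree + 1) * 0.1)
        self.b = nn.Parameter(torch.randn(q_degree) * 0.1)

    def forward(self, x):
        powers_p = torch.stack([x ** j for j in range(self.a.numel())], dim=-1)
        numerator = (powers_p * self.a).sum(dim=-1)
        powers_q = torch.stack([x ** (k + 1) for k in range(self.b.numel())], dim=-1)
        denominator = 1.0 + (powers_q * self.b).sum(dim=-1).abs()
        return numerator / denominator
```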