Recent research has proposed a series of specialized optimization algorithms for deep multi-task models. It is often claimed that these multi-task optimization (MTO) methods yield solutions superior to those found by simply optimizing a weighted average of the task losses. In this paper, we perform large-scale experiments on a variety of language and vision tasks to examine the empirical validity of these claims. We show that, despite their added design and computational complexity, MTO methods do not yield any improvement beyond what is achievable via traditional optimization approaches. We highlight alternative strategies that consistently improve the performance profile and point out common training pitfalls that may lead to suboptimal results. Finally, we outline the challenges of reliably evaluating the performance of MTO algorithms and discuss potential solutions.
Recent multi-task learning research argues against unitary scalarization, in which training simply minimizes the sum of the task losses. In its place, several ad-hoc multi-task optimization algorithms have been proposed, inspired by various hypotheses about what makes multi-task settings difficult. Most of these optimizers require per-task gradients and introduce significant memory, runtime, and implementation overhead. We present a theoretical analysis showing that many specialized multi-task optimizers can be interpreted as forms of regularization. Moreover, we show that, when coupled with standard regularization and stabilization techniques from single-task learning, unitary scalarization matches or improves upon the performance of complex multi-task optimizers in both supervised and reinforcement learning settings. We believe our results call for a critical reevaluation of recent research in this area.
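To make the unitary scalarization baseline above concrete, here is a minimal sketch (not code from the paper): a shared trunk with two task heads trained by minimizing the plain sum of per-task losses, with ordinary single-task regularization (weight decay) attached to the optimizer. The model sizes and data are placeholders.

```python
# Minimal sketch of unitary scalarization: a shared trunk with two task heads,
# trained by minimizing the plain (unweighted) sum of per-task losses.
# Shapes and data are placeholders, not taken from the paper.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
heads = nn.ModuleList([nn.Linear(64, 10), nn.Linear(64, 5)])   # two tasks
params = list(trunk.parameters()) + list(heads.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-4)  # standard single-task regularization
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32)                                         # shared input batch
targets = [torch.randint(0, 10, (16,)), torch.randint(0, 5, (16,))]

features = trunk(x)
total_loss = sum(loss_fn(head(features), y) for head, y in zip(heads, targets))
optimizer.zero_grad()
total_loss.backward()   # one backward pass; no per-task gradients are needed
optimizer.step()
```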
The recent trend of training ever larger language models has substantially improved machine learning performance on language tasks. However, the enormous cost of training larger models can make tuning them prohibitively expensive, motivating research into more efficient methods. Gradient-based hyperparameter optimization offers the ability to tune hyperparameters during training, but it has not previously been studied in the sequence-to-sequence setting. We apply a simple and general gradient-based hyperparameter optimization method to sequence-to-sequence tasks for the first time, demonstrating gains in efficiency and performance over strong baselines on neural machine translation and natural language understanding (NLU) tasks (via T5 pretraining). For translation, we show the method generalizes across language pairs, is more efficient than Bayesian hyperparameter optimization, and that learned schedules for some hyperparameters can outperform optimally tuned constant values. For T5, we show that learning hyperparameters during pretraining improves performance on downstream NLU tasks. When learning multiple hyperparameters jointly, we show that the global learning rate can follow a schedule over the course of training that improves performance and cannot be explained by the "short-horizon bias" of greedy methods. We release our code to facilitate further research.
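The abstract does not spell out the exact algorithm, so the following is only a toy illustration of the general idea behind gradient-based hyperparameter optimization: differentiate a validation loss through one unrolled SGD step to obtain a gradient with respect to the learning rate. All shapes and the outer step size are made up.

```python
# Toy sketch (not the paper's exact method): differentiating a validation loss
# through one SGD step to obtain a gradient w.r.t. the learning rate itself.
import torch

torch.manual_seed(0)
w = torch.randn(20, requires_grad=True)           # toy linear-model parameters
log_lr = torch.tensor(-3.0, requires_grad=True)   # learn the log learning rate

X_tr, y_tr = torch.randn(64, 20), torch.randn(64)
X_va, y_va = torch.randn(64, 20), torch.randn(64)

def mse(w_, X, y):
    return ((X @ w_ - y) ** 2).mean()

# Inner step: w' = w - lr * grad_w L_train(w); keep the graph so lr stays differentiable.
g = torch.autograd.grad(mse(w, X_tr, y_tr), w, create_graph=True)[0]
w_next = w - log_lr.exp() * g

# Outer step: the hypergradient of the validation loss w.r.t. log_lr.
val_loss = mse(w_next, X_va, y_va)
hypergrad, = torch.autograd.grad(val_loss, log_lr)
with torch.no_grad():
    log_lr -= 0.1 * hypergrad                     # hyperparameter update
print(float(log_lr.exp()))
```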
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Multi-task learning (MTL) has achieved great success in various fields, but how to balance different tasks so as to avoid negative effects remains a key problem. To achieve task balancing, many effective works exist that balance task losses or gradients. In this paper, we unify eight representative task-balancing methods from the perspective of loss weighting and provide a consistent experimental comparison. Moreover, we surprisingly find that training an MTL model with random weights sampled from a distribution can achieve performance comparable to state-of-the-art baselines. Based on this finding, we propose a simple yet effective weighting strategy called Random Loss Weighting (RLW), which can be implemented on top of existing works with only a single additional line of code. Theoretically, we analyze the convergence of RLW and reveal that RLW has a higher probability of escaping local minima than existing models with fixed task weights, resulting in better generalization. Empirically, we extensively evaluate the proposed RLW method on six image datasets and four multilingual tasks from the XTREME benchmark to demonstrate its effectiveness compared with state-of-the-art strategies.
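A minimal sketch of the RLW idea as described above, assuming the commonly used variant that draws weights from a softmax over i.i.d. standard normal samples at each iteration; the per-task losses below are dummies standing in for real forward passes.

```python
# Random Loss Weighting (RLW) sketch: resample task weights every training step
# instead of using fixed weights; everything else is an ordinary MTL training step.
import torch
import torch.nn.functional as F

num_tasks = 3
params = torch.nn.Parameter(torch.randn(8))
# Dummy per-task losses standing in for the real forward passes.
task_losses = torch.stack([((params - t) ** 2).mean() for t in range(num_tasks)])

# The "one extra line": random weights from a softmax over standard normal samples.
weights = F.softmax(torch.randn(num_tasks), dim=-1)
loss = (weights * task_losses).sum()
loss.backward()
```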
Multilingual models are parameter-efficient and especially effective at improving low-resource languages by leveraging crosslingual transfer. Despite recent massive multilingual translation efforts with ever-growing model and data sizes, how to effectively train multilingual models is not well understood. In this paper, we show that a common situation in multilingual training, data imbalance among languages, creates optimization tension between high-resource and low-resource languages, where the multilingual solutions found are often suboptimal for low-resource languages. We show that the common training practice of upsampling low resources fails to robustly optimize the population loss, with risks of either underfitting or overfitting low-resource languages. Drawing on recent findings on the geometry of the loss landscape and its effect on generalization, we propose a principled optimization algorithm, Curvature-Aware Task Scaling (CAT), which adaptively rescales gradients from different tasks and steers multilingual training toward low-curvature neighborhoods with uniformly low loss for all languages. We ran experiments on common benchmarks (TED, WMT, and OPUS-100) with varying degrees of data imbalance. CAT effectively improves multilingual optimization and shows consistent gains on low-resource languages (+0.8 to +2.2 BLEU) without hurting high-resource ones. Furthermore, CAT is robust to overparameterization and large-batch training, which makes it a promising training method for massive multilingual models that truly improve low-resource languages.
Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) is primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with less than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low and high resource language pairs effectively, and can lead to superior performance overall.
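A small sketch of the temperature-based sampling mentioned above, under the standard convention p_i ∝ n_i^{1/T}: T = 1 samples language pairs in proportion to their data size, while larger T flattens the distribution and upweights low-resource pairs. The dataset sizes are invented for illustration.

```python
# Temperature-based sampling over language pairs: p_i proportional to n_i^(1/T).
import numpy as np

sizes = np.array([50_000_000, 2_000_000, 100_000])  # sentence pairs per language pair (made up)
for T in (1.0, 5.0, 100.0):
    p = sizes ** (1.0 / T)
    p /= p.sum()
    print(T, np.round(p, 3))   # larger T -> flatter distribution, more weight on low-resource pairs
```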
In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions. We apply our method to a variety of multi-task deep learning problems including digit classification, scene understanding (joint semantic segmentation, instance segmentation, and depth estimation), and multi-label classification. Our method produces higher-performing models than recent multi-task learning formulations or per-task training.
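For the two-task case, the min-norm subproblem underlying this family of methods has a closed form; the sketch below (with toy gradient vectors) computes the weight gamma in [0, 1] that minimizes ||gamma*g1 + (1-gamma)*g2||^2 and the resulting common descent direction. It illustrates the underlying idea, not the paper's full algorithm.

```python
# Closed-form two-task min-norm combination of task gradients.
import numpy as np

def min_norm_two_tasks(g1, g2):
    # Minimize ||gamma*g1 + (1-gamma)*g2||^2 over gamma in [0, 1].
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5
    gamma = float((g2 - g1) @ g2) / denom
    return float(np.clip(gamma, 0.0, 1.0))

g1 = np.array([1.0, 0.5, -0.2])    # task 1 gradient (toy values)
g2 = np.array([-0.8, 0.4, 0.1])    # task 2 gradient (toy values)
gamma = min_norm_two_tasks(g1, g2)
update = gamma * g1 + (1 - gamma) * g2   # common descent direction
print(gamma, update)
```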
The performance of trained neural networks is remarkably robust to aggressive pruning. Coupled with the ever-growing size of deep learning models, this observation has motivated extensive research into learning sparse models. In this work, we focus on the task of controlling the level of sparsity when performing sparse learning. Existing methods based on sparsity-inducing penalties involve expensive trial-and-error tuning of the penalty factor and thus lack direct control over the sparsity of the resulting model. In response, we adopt a constrained formulation: using the gating mechanism proposed by Louizos et al. (2018), we formulate a constrained optimization problem in which sparsification is guided by the training objective and the desired sparsity target in an end-to-end fashion. Experiments on CIFAR-10/100, TinyImageNet, and ImageNet using WideResNet and ResNet-{18, 50} models validate the effectiveness of our proposal and demonstrate that we can reliably achieve pre-determined sparsity targets without compromising predictive performance.
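A heavily simplified sketch of the constrained formulation described above: deterministic sigmoid gates provide a differentiable density proxy, and a Lagrange multiplier updated by dual ascent enforces a density target in place of a hand-tuned penalty factor. The paper uses the stochastic hard-concrete gates of Louizos et al. (2018); the gate parameterization, toy model, and step sizes below are assumptions for illustration only.

```python
# Simplified constrained-sparsity sketch: task loss + multiplier * (density - target),
# with the multiplier updated by dual (gradient) ascent and kept non-negative.
import torch

target_density = 0.3                        # i.e. 70% sparsity target
w = torch.nn.Parameter(torch.randn(128, 64))
gate_logits = torch.nn.Parameter(torch.zeros(128, 64))
lmbda = torch.zeros(())                     # Lagrange multiplier

opt = torch.optim.Adam([w, gate_logits], lr=1e-2)
x, y = torch.randn(32, 128), torch.randn(32, 64)

for step in range(200):
    gates = torch.sigmoid(gate_logits)      # soft gates in (0, 1)
    pred = x @ (w * gates)
    task_loss = ((pred - y) ** 2).mean()
    density = gates.mean()                  # differentiable proxy for fraction of active weights
    lagrangian = task_loss + lmbda * (density - target_density)
    opt.zero_grad()
    lagrangian.backward()
    opt.step()
    with torch.no_grad():                   # dual ascent on the multiplier
        lmbda = torch.clamp(lmbda + 0.1 * (density.detach() - target_density), min=0.0)
```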
In multi-task learning (MTL), a joint model is trained to make predictions for several tasks simultaneously. Joint training reduces computation cost and improves data efficiency; however, since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts. A common method for alleviating this issue is to combine the per-task gradients into a joint update direction using a particular heuristic. In this paper, we propose viewing the gradient-combination step as a bargaining game, in which tasks negotiate to reach an agreement on a joint direction of parameter update. Under certain assumptions, the bargaining problem has a unique solution, known as the Nash bargaining solution, which we propose to use as a principled approach to multi-task learning. We describe a new MTL optimization procedure, Nash-MTL, and derive theoretical guarantees for its convergence. Empirically, we show that Nash-MTL achieves state-of-the-art results on multiple MTL benchmarks across various domains.
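According to the paper (under its assumptions), the bargaining-based task weights alpha > 0 satisfy (G^T G) alpha = 1/alpha element-wise, where the columns of G are the per-task gradients. The sketch below solves this condition numerically for toy gradients with a generic least-squares solver; the actual method uses its own solver and additional machinery.

```python
# Numerical sketch: find per-task weights alpha > 0 satisfying (G^T G) alpha = 1/alpha.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
G = rng.normal(size=(1000, 3))           # 3 toy task gradients in a 1000-dim parameter space
GtG = G.T @ G

def residual(alpha):
    return GtG @ alpha - 1.0 / alpha     # zero at the bargaining solution

alpha = least_squares(residual, np.ones(3), bounds=(1e-8, np.inf)).x
update = G @ alpha                       # joint update direction
print(alpha)
```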
Hyperparameter optimization constitutes a large part of typical modern machine learning workflows. This is due to the fact that machine learning methods and their corresponding preprocessing steps often only yield optimal performance when hyperparameters are properly tuned. But in many applications, we are not just interested in optimizing ML pipelines for predictive accuracy alone; additional metrics or constraints must be considered when determining an optimal configuration, resulting in a multi-objective optimization problem. This is often neglected in practice, due to a lack of knowledge of, and readily available software implementations for, multi-objective hyperparameter optimization. In this work, we introduce the reader to the basics of multi-objective hyperparameter optimization and motivate its usefulness in applied ML. Furthermore, we provide an extensive survey of existing optimization strategies from the domains of evolutionary algorithms and Bayesian optimization. We illustrate the utility of MOO in several specific ML applications, considering objectives such as operating conditions, prediction time, sparseness, fairness, interpretability, and robustness.
Differential privacy (DP) provides a formal privacy guarantee that prevents an adversary with access to a machine learning model from extracting information about individual training points. The most popular DP training method is differentially private stochastic gradient descent (DP-SGD), which achieves this protection by injecting noise during training. However, previous work has found that DP-SGD often causes a significant degradation of performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyperparameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we obtain a new SOTA on CIFAR-10 without extra data of 81.4% under (8, 10^{-5})-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve 83.8% top-1 accuracy on ImageNet under (0.5, 8*10^{-7})-DP. In addition, we achieve 86.7% top-1 accuracy under (8, 8*10^{-7})-DP, which is only 4.3% below the current non-private SOTA for this task. We believe our results are a significant step toward closing the accuracy gap between private and non-private image classification.
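For reference, a minimal sketch of the core DP-SGD step referred to above: clip each per-example gradient to a norm bound C, sum, add Gaussian noise scaled by C and a noise multiplier, then average. Real training additionally requires privacy accounting over all steps; the values below are illustrative.

```python
# Core DP-SGD step: per-example clipping followed by calibrated Gaussian noise.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                rng=np.random.default_rng(0)):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))   # clip to norm <= C
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)               # noisy averaged gradient

grads = [np.random.randn(10) for _ in range(32)]   # toy per-example gradients
print(dp_sgd_step(grads))
```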
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
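A compact sketch of the Lookahead rule on a toy quadratic, assuming the commonly cited defaults k = 5 and alpha = 0.5 with plain SGD as the inner optimizer: the fast weights are updated every step, and every k steps the slow weights move a fraction alpha toward them and the fast weights are reset.

```python
# Lookahead sketch: inner optimizer produces fast weights; slow weights interpolate toward them.
import numpy as np

def grad(w):                 # toy quadratic objective 0.5 * ||w||^2
    return w

fast = np.array([5.0, -3.0])
slow = fast.copy()
k, alpha, lr = 5, 0.5, 0.1

for step in range(1, 101):
    fast -= lr * grad(fast)              # inner ("fast") SGD update
    if step % k == 0:
        slow += alpha * (fast - slow)    # slow weights look ahead
        fast = slow.copy()               # reset fast weights to the slow weights
print(slow)
```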
Multi-task learning (MTL) models have demonstrated impressive results in computer vision, natural language processing, and recommender systems. Even though many approaches have been proposed, how well these approaches balance different tasks on each parameter still remains unclear. In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task on this parameter. Specifically, we compute the total updates by the exponentially decaying Average of the squared Updates (AU) on a parameter from the corresponding task. Based on this novel metric, we observe that many parameters in existing MTL methods, especially those in the higher shared layers, are still dominated by one or several tasks. The dominance of AU is mainly due to the dominance of accumulative gradients from one or several tasks. Motivated by this, we propose a Task-wise Adaptive learning rate approach, AdaTask in short, to separate the accumulative gradients, and hence the learning rate, of each task for each parameter in adaptive learning rate approaches (e.g., AdaGrad, RMSProp, and Adam). Comprehensive experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of dominated tasks, resulting in SOTA average task-wise performance. Analysis on both synthetic and real-world datasets shows that AdaTask balances parameters in every shared layer well.
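A rough AdaGrad-style sketch of the task-wise adaptive learning rate idea: keep a separate squared-gradient accumulator per task, scale each task's gradient by its own accumulator, and apply the sum. The exponential decay and the Adam/RMSProp variants described in the paper are omitted, and the toy gradients are placeholders.

```python
# Task-wise adaptive learning rates (AdaGrad-style): one accumulator per task.
import numpy as np

dim, num_tasks, lr, eps = 8, 2, 0.1, 1e-8
w = np.random.randn(dim)
accum = np.zeros((num_tasks, dim))            # separate squared-gradient accumulator per task

def task_grad(t, w):                          # toy per-task gradients
    target = np.full(dim, float(t))
    return w - target

for _ in range(100):
    update = np.zeros(dim)
    for t in range(num_tasks):
        g = task_grad(t, w)
        accum[t] += g ** 2                    # task-wise accumulation
        update += g / (np.sqrt(accum[t]) + eps)
    w -= lr * update
print(w)
```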
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying the learning task, the model architecture, and/or the quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when the loss landscape is globally well connected, ensembles of trained models are more similar to one another, and models converge to locally smooth regions. We also show that globally poorly connected landscapes can arise when models are small or are trained on lower-quality data, and that, if the loss landscape is globally poorly connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and the consequent double-descent behavior), fundamental versus incidental determinants of good generalization, the roles of load-like and temperature-like parameters in the learning process, the different influences of model and data on the loss landscape, and the relationships between local and global metrics, all topics of recent interest.
We present a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network. Learning multiple problems simultaneously involves minimizing a weighted sum of multiple loss functions with different magnitudes and characteristics, which leads to an unbalanced state in which one loss dominates the optimization and to poor results compared with learning a separate model for each problem. To this end, we propose distilling the knowledge of multiple task-specific and domain-specific networks into a single deep neural network through small-capacity adapters. We rigorously show that universal representations achieve state-of-the-art performance in learning multiple dense prediction problems on NYU-v2 and Cityscapes, multiple image classification problems from diverse domains in the Visual Decathlon dataset, and cross-domain few-shot learning in MetaDataset. Finally, we also conduct multiple analyses through ablation and qualitative studies.
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S4L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
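A skeleton of the BYOL training signal described above, with stand-in MLP encoders and noise in place of image augmentations: the online network plus a predictor regresses the target network's output for the other view under a symmetrized cosine loss, and the target is updated as an exponential moving average of the online encoder. (The projector heads of the full method are folded into the encoders here.)

```python
# BYOL-style update sketch: online encoder + predictor vs. EMA target encoder.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

online_encoder, predictor = mlp(32, 64), mlp(64, 64)
target_encoder = copy.deepcopy(online_encoder)        # EMA copy, no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(online_encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
x = torch.randn(16, 32)
view1 = x + 0.1 * torch.randn_like(x)                 # noise stands in for augmentations
view2 = x + 0.1 * torch.randn_like(x)

def regression_loss(p, z):
    # 2 - 2*cos(p, z): equivalent to MSE between L2-normalized vectors; target is detached.
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

loss = regression_loss(predictor(online_encoder(view1)), target_encoder(view2)) \
     + regression_loss(predictor(online_encoder(view2)), target_encoder(view1))
opt.zero_grad()
loss.backward()
opt.step()

tau = 0.99                                             # slow-moving average of the online network
with torch.no_grad():
    for pt, po in zip(target_encoder.parameters(), online_encoder.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```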
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
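A sketch of the symmetric contrastive pre-training objective the abstract describes: embed both modalities, L2-normalize, form a temperature-scaled similarity matrix, and apply cross-entropy in both directions with matching pairs on the diagonal. The linear encoders and dimensions are placeholders, not the paper's architecture.

```python
# Symmetric image-text contrastive loss sketch with a learnable temperature.
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, d_img, d_txt, d_emb = 8, 512, 256, 64
image_encoder, text_encoder = nn.Linear(d_img, d_emb), nn.Linear(d_txt, d_emb)
log_temperature = nn.Parameter(torch.tensor(0.07).log())

image_feats = F.normalize(image_encoder(torch.randn(batch, d_img)), dim=-1)
text_feats = F.normalize(text_encoder(torch.randn(batch, d_txt)), dim=-1)

logits = image_feats @ text_feats.t() / log_temperature.exp()   # temperature-scaled similarities
labels = torch.arange(batch)                                    # matching pairs on the diagonal
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
loss.backward()
```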