Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. However, these improvements through scaling alone require considerable costs in compute and energy. Here we focus on the scaling of error with dataset size and show how, in theory and in practice, we can break beyond power-law scaling and reduce it to exponential scaling if we have access to a high-quality data-pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size. We then test this new exponential scaling prediction empirically across pruned dataset sizes, and indeed observe better-than-power-law scaling for ResNets trained on CIFAR-10, SVHN, and ImageNet. Given the importance of finding high-quality pruning metrics, we perform the first large-scale benchmarking study of ten different data-pruning metrics on ImageNet. We find that most existing high-performing metrics scale poorly to ImageNet, while the best ones are computationally intensive and require labels for every image. We therefore develop a new simple, cheap, and scalable self-supervised pruning metric that performs comparably to the best supervised metrics. Overall, our work suggests that the discovery of good data-pruning metrics may provide a viable path toward substantially improved neural scaling laws, thereby reducing the resource costs of modern deep learning.
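As a sketch of the pruning protocol this abstract describes: score every training example with some metric, rank, and keep a fraction. The metric itself is left abstract (the paper's self-supervised metric is based on distances to k-means prototypes in an embedding space), and the `keep_hard` switch reflects the finding that which end of the ranking to keep depends on how much data is available. Function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def prune_dataset(scores: np.ndarray, keep_fraction: float, keep_hard: bool = True):
    """Rank training examples by a pruning metric and keep a fraction.

    `scores` is assumed to assign higher values to "harder" (more
    informative) examples. Which end to keep is a design choice: with
    abundant data, keeping hard examples works best; with scarce data,
    keeping easy examples can be better.
    """
    n_keep = int(len(scores) * keep_fraction)
    order = np.argsort(scores)                      # ascending: easy -> hard
    kept = order[::-1][:n_keep] if keep_hard else order[:n_keep]
    return kept                                     # indices into the training set

# Usage: indices = prune_dataset(metric_scores, keep_fraction=0.8)
```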
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
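Where the review contrasts power-law and exponential curve shapes, a small fitting experiment makes the distinction concrete. The data points below are made up for illustration; in practice one would compare the fitted families on held-out training-set sizes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two common parametric learning-curve families discussed in the review:
def power_law(n, a, b, c):
    return a * n ** (-b) + c            # error ~ a*n^-b plus irreducible floor c

def exponential(n, a, b, c):
    return a * np.exp(-b * n) + c

n = np.array([100, 200, 400, 800, 1600, 3200])
err = np.array([0.42, 0.35, 0.29, 0.24, 0.21, 0.185])   # hypothetical measurements

p_pow, _ = curve_fit(power_law, n, err, p0=(1.0, 0.5, 0.1), maxfev=10000)
p_exp, _ = curve_fit(exponential, n, err, p0=(0.5, 1e-3, 0.1), maxfev=10000)

# Compare residuals to judge which family describes this curve better.
for name, f, p in [("power law", power_law, p_pow), ("exponential", exponential, p_exp)]:
    rss = np.sum((err - f(n, *p)) ** 2)
    print(f"{name}: params={np.round(p, 4)}, rss={rss:.2e}")
```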
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions for the vision and language modalities, respectively. Plex greatly improves the state of the art across reliability tasks, and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in dialogue language understanding.
The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets into the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated a recent flurry of activity in developing heuristic alternatives, following the intuitive mechanism of promoting a larger margin for minority classes. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in the TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the relative margins between classes change appropriately. To show this, we uncover a connection between the multiplicative CE modification and cost-sensitive support-vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unified way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced/standard error and equal opportunity.
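To make the loss adjustments concrete, here is a hedged PyTorch sketch of the VS loss: cross-entropy applied to logits that are adjusted per class, both multiplicatively and additively. The parameterization Delta_c = (n_c/n_max)^gamma and iota_c = tau*log(pi_c) is one common choice; the hyperparameter values are placeholders, and gamma = tau = 0 recovers plain cross-entropy.

```python
import torch
import torch.nn.functional as F

def vs_loss(logits, targets, class_counts, gamma=0.2, tau=1.0):
    """Vector-scaling (VS) loss sketch: cross-entropy on per-class
    multiplicatively (delta) and additively (iota) adjusted logits.
    gamma=0 recovers purely additive logit adjustment."""
    counts = torch.as_tensor(class_counts, dtype=logits.dtype, device=logits.device)
    delta = (counts / counts.max()) ** gamma          # multiplicative adjustment
    iota = tau * torch.log(counts / counts.sum())     # additive adjustment
    return F.cross_entropy(logits * delta + iota, targets)

# Usage: loss = vs_loss(model(x), y, class_counts=[5000, 500, 50])
```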
The rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors for a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and CL in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, and the resulting latent space affect CL performance. For this, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in both the latent and the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics and not necessarily on the CL algorithms. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute cost. We then show how models pre-trained on broader data yield better performance for various replay sizes. We explain this with the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that lie outside the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_cl.
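The claim that a non-parametric classifier over frozen pre-trained features can already give reasonable continual-learning performance can be illustrated with a nearest-class-mean head. This is a generic instance of the idea, not necessarily the exact classifier used in the study: class prototypes are running feature means, so "training" on a new task is a cheap streaming update with no gradients.

```python
import numpy as np

class NearestClassMean:
    """Non-parametric classifier over frozen encoder features: each class
    is represented by the mean of its features, updated incrementally as
    new tasks arrive."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, feats, labels):            # feats: (n, d) encoder outputs
        for f, y in zip(feats, labels):
            self.sums[y] = self.sums.get(y, np.zeros(len(f))) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats):
        classes = sorted(self.sums)
        protos = np.stack([self.sums[c] / self.counts[c] for c in classes])
        d = ((feats[:, None, :] - protos[None]) ** 2).sum(-1)   # (n, C) distances
        return np.array(classes)[d.argmin(1)]
```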
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
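The pre-training task of "predicting which caption goes with which image" is a symmetric contrastive objective over a batch. A minimal sketch follows; note that CLIP actually learns the temperature as a parameter, whereas it is fixed here for simplicity.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE objective at the core of CLIP pre-training: within
    a batch, each image's matching caption is the positive and all other
    captions are negatives (and vice versa)."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature       # (B, B) similarities
    labels = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, labels) +             # image -> text
            F.cross_entropy(logits.t(), labels)) / 2      # text -> image
```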
Many recent works on understanding deep learning try to quantify how much individual data instances influence the optimization and generalization of a model, either by analyzing the behavior of the model during training or by measuring the performance gap of the model when the instance is removed from the dataset. Such approaches reveal the characteristics and importance of individual instances, which may provide useful information for diagnosing and improving deep learning. However, most existing works on data valuation require actual training of a model, which often demands a high computational cost. In this paper, we provide a training-free data valuation score, called the complexity-gap score, which is a data-centric score that quantifies the influence of individual instances on the generalization of two-layer overparameterized neural networks. The proposed score can quantify the irregularity of instances and measure how much each data instance contributes to the total movement of the network parameters during training. We theoretically analyze and empirically demonstrate the effectiveness of the complexity-gap score in finding 'irregular or mislabeled' data instances, and also provide applications of the score in analyzing datasets and diagnosing training dynamics.
Machine learning models often encounter samples that deviate from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning that sample to an in-distribution class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for the safe deployment of models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection. Despite sharing similar and common concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they tend to focus only on a specific domain without examining the relationships between different domains. This survey aims to provide a cross-domain, comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodologies synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
Efficiently approximating local curvature information of the loss function is a key tool for the optimization and compression of deep neural networks. Yet most existing methods for approximating second-order information have high computational or storage costs, which can limit their practicality. In this work, we investigate matrix-free, linear-time approaches for estimating inverse-Hessian vector products (IHVPs) for the case when the Hessian can be approximated as a sum of rank-one matrices, as in the classic approximation of the Hessian by the empirical Fisher matrix. We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute the IHVP for dimension d, if the Hessian is given as a sum of m rank-one matrices, using O(dm^2) precomputation, O(dm) cost per IHVP, and query cost O(m) for any single element of the inverse Hessian. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction, as required for preconditioned SGD. We give an algorithm with cost O(dm + m^2) for computing the IHVP and O(dm + m^3) for adding or removing any gradient from the sliding window. These two algorithms yield state-of-the-art results for network pruning and optimization with lower computational overhead relative to existing second-order methods. Implementations are available at [9] and [17].
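The underlying structure these algorithms exploit can be illustrated with the Woodbury identity: when the Hessian is a damped sum of m rank-one gradient outer products, the IHVP reduces to solving an m x m system. This sketch shows only that identity; M-FAC's actual algorithms add recursive precomputation, single-element queries, and sliding-window updates on top of it.

```python
import numpy as np

def ihvp_woodbury(grads, v, damping=1e-4):
    """Inverse-Hessian-vector product for the damped empirical Fisher,
    H ~ lam*I + (1/m) * sum_i g_i g_i^T.

    With U = G / sqrt(m) (a d x m matrix), the Woodbury identity gives
      (lam*I + U U^T)^{-1} v = (v - U (lam*I_m + U^T U)^{-1} U^T v) / lam,
    costing O(dm^2) for the small Gram matrix and O(dm) per product,
    matching the complexity regime quoted in the abstract."""
    m = len(grads)
    U = np.stack(grads, axis=1) / np.sqrt(m)          # d x m
    gram = U.T @ U + damping * np.eye(m)              # m x m
    return (v - U @ np.linalg.solve(gram, U.T @ v)) / damping

# Usage: x = ihvp_woodbury([g1, g2, g3], some_direction)
```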
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available at https://github.com/brain-research/realistic-ssl-evaluation.
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high-confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimating predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improving the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose the Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration, and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
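A minimal sketch of SNGP's two changes follows, assuming a backbone that emits feature vectors: spectral normalization on a hidden layer, and a random-Fourier-feature Gaussian-process output layer. The full method also maintains a Laplace-approximated posterior covariance over the output weights for predictive variance, which is omitted here; dimensions and the RFF count are placeholders.

```python
import torch
import torch.nn as nn

class SNGPHead(nn.Module):
    """Sketch of SNGP on top of a backbone: (1) spectral normalization on
    hidden weights for a bi-Lipschitz encoder, (2) a random-Fourier-feature
    approximation of a Gaussian-process output layer."""

    def __init__(self, in_dim, hidden_dim, num_classes, num_rff=1024):
        super().__init__()
        self.hidden = nn.utils.spectral_norm(nn.Linear(in_dim, hidden_dim))
        # Fixed random projection approximating an RBF kernel's feature map.
        self.register_buffer("W", torch.randn(hidden_dim, num_rff))
        self.register_buffer("b", 2 * torch.pi * torch.rand(num_rff))
        self.out = nn.Linear(num_rff, num_classes, bias=False)  # GP posterior mean

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        phi = torch.cos(h @ self.W + self.b) * (2.0 / self.W.shape[1]) ** 0.5
        return self.out(phi)
```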
Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen, due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolves in eigenstructure while still small and before the loss appreciably decreases, and grows only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect.
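Silent alignment concerns how the tangent kernel's eigenstructure rotates toward the task early in training. One crude, generic diagnostic for tracking such rotation is kernel-target alignment between an empirical NTK Gram matrix and the labels; this is an illustrative measure, not the paper's own analysis.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Alignment between a kernel Gram matrix K (n x n) and the label
    kernel y y^T, via the normalized Frobenius inner product. Tracking
    this over training steps indicates whether the kernel's eigenstructure
    has rotated toward the task."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))
```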
Web-crawled datasets have enabled remarkable generalization capabilities in recent image-text models such as CLIP (Contrastive Language-Image Pre-training) or Flamingo, but little is known about the dataset creation processes. In this work, we introduce a testbed of six publicly available data sources — YFCC, LAION, Conceptual Captions, WIT, RedCaps, Shutterstock — to investigate how pre-training distributions induce robustness in CLIP. We find that the performance of the pre-training data varies substantially across distribution shifts, with no single data source dominating. Moreover, we systematically study the interactions between these data sources and find that combining multiple sources does not necessarily yield better models, but rather dilutes the robustness of the best individual data source. We complement our empirical findings with theoretical insights from a simple setting, where combining the training data also results in diluted robustness. In addition, our theoretical model provides a candidate explanation for the success of the CLIP-based data filtering technique recently employed in the LAION dataset. Overall, our results demonstrate that simply gathering a large amount of data from the web is not the most effective way to build a pre-training dataset for robust generalization, necessitating further study into dataset design.
Deep learning's success has been attributed to training large, overparameterized models on massive amounts of data. As this trend continues, model training has become prohibitively costly, requiring access to powerful computing systems to train state-of-the-art networks. A large body of research has been devoted to addressing the cost per iteration of training through various model compression techniques such as pruning and quantization. Less effort has been spent targeting the number of iterations. Previous work, such as forgetting scores and GraNd/EL2N scores, addresses this problem by identifying important samples within the full dataset and pruning the remaining samples, thereby reducing the iterations per epoch. Although these methods decrease the training time, they use expensive static scoring algorithms prior to training. When accounting for the scoring mechanism, the total run time is often increased. In this work, we address this shortcoming with dynamic data pruning algorithms. Surprisingly, we find that uniform random dynamic pruning can outperform the prior work at aggressive pruning rates. We attribute this to the existence of "sometimes" samples — points that are important to the learned decision boundary only during some of the training time. To better exploit the subtlety of sometimes samples, we propose two algorithms, based on reinforcement learning techniques, to dynamically prune samples and achieve even higher accuracy than the random dynamic method. We test all our methods against a full-dataset baseline and the prior work on CIFAR-10 and CIFAR-100, and we can reduce the training time by up to 2x without significant performance loss. Our results suggest that data pruning should be understood as a dynamic process closely tied to a model's training trajectory, instead of a static step based solely on the dataset.
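The uniform random dynamic pruning baseline the abstract highlights is simple to state in code: re-draw the kept subset every epoch, so no example is permanently discarded. The RL-based variants are not sketched here; names and hyperparameters below are placeholders.

```python
import numpy as np
from torch.utils.data import DataLoader, Subset

def dynamic_random_loader(dataset, keep_fraction, batch_size):
    """Uniform random dynamic pruning: sample a fresh subset of the training
    set for this epoch, reducing iterations per epoch without committing to
    a fixed, statically scored subset."""
    n_keep = int(len(dataset) * keep_fraction)
    idx = np.random.choice(len(dataset), size=n_keep, replace=False)
    return DataLoader(Subset(dataset, idx.tolist()),
                      batch_size=batch_size, shuffle=True)

# for epoch in range(num_epochs):
#     loader = dynamic_random_loader(train_set, keep_fraction=0.3, batch_size=128)
#     train_one_epoch(model, loader)
```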
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
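The two discriminative statistics the abstract names — disagreement rate and entropy — can be computed from an ensemble's test-time predictions as below. This is one straightforward way to compute them (measuring disagreement against a reference member, natural-log entropy of the averaged prediction); the paper's ensemble training objective is not shown.

```python
import numpy as np

def disagreement_statistics(probs):
    """Statistics over ensemble predictions for flagging covariate shift.
    `probs` has shape (n_models, n_examples, n_classes)."""
    preds = probs.argmax(-1)                                  # (M, N) hard labels
    # Fraction of examples on which the members do not all agree.
    disagreement = (preds != preds[0]).any(0).mean()
    # Entropy of the ensemble-averaged predictive distribution.
    mean_p = probs.mean(0)                                    # (N, C)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(-1).mean()
    return disagreement, entropy
```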
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
A significant obstacle in the development of robust machine learning models is covariate shift, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when a shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this intriguing phenomenon. Additionally, our analysis reveals an exact linear relationship between in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent empirical observation.
There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper, we address this issue and significantly bridge the gap between these two types of models. In our empirical study, we do not necessarily aim to propose a new method, but rather strive to identify a robust and effective recipe for making state-of-the-art large-scale models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we uncover that there are certain implicit design choices which may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which were not previously articulated in the literature. We back up our findings with a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets, and in particular obtain a state-of-the-art ResNet-50 model for ImageNet, which achieves 82.8% top-1 accuracy.
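For reference, the distillation objective underlying this recipe is the standard temperature-scaled KL loss sketched below. The paper's contribution is in the implicit design choices around it (the same aggressive augmentation for teacher and student, very long schedules), not in the loss itself; the temperature value here is a placeholder.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Plain knowledge-distillation objective: match the student's tempered
    softmax to the teacher's tempered softmax."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * t * t   # standard temperature-squared gradient rescaling
```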
We study unsupervised data selection for semi-supervised learning (SSL), where a large-scale unlabeled dataset is available and a small subset of it is budgeted for label acquisition. Existing SSL methods focus on learning a model that effectively integrates information from the given small labeled data and the large unlabeled data, whereas we focus on selecting the right data to annotate for SSL, without any label or task information. Intuitively, the instances to be labeled should collectively have maximum diversity and coverage for the downstream task, and individually have maximum information-propagation utility for SSL. We formalize these concepts in a three-step data-centric SSL method that improves FixMatch in stability and accuracy by 8% on CIFAR-10 (with 0.08% labeled) and 14% on ImageNet-1K (with 0.2% labeled). It is also a universal framework that works with various SSL methods, providing consistent performance gains. Our work demonstrates that a small amount of compute spent on careful labeled-data selection brings large annotation-efficiency and model-performance gains without changing the learning pipeline. Our completely unsupervised data selection can be easily extended to other weakly supervised learning settings.
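The diversity-and-coverage intuition can be illustrated with a clustering-based selection sketch: cluster self-supervised features into as many groups as the labeling budget and label the example nearest each centroid. This is a generic instance of the idea; the actual method layers a density/information-propagation criterion on top, and the function name is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(features, budget, seed=0):
    """Unsupervised selection sketch: pick one representative per cluster of
    self-supervised features, so the labeled set is diverse and covers the
    data distribution."""
    km = KMeans(n_clusters=budget, random_state=seed, n_init=10).fit(features)
    chosen = []
    for c in km.cluster_centers_:
        chosen.append(int(np.linalg.norm(features - c, axis=1).argmin()))
    return sorted(set(chosen))   # indices of examples to send for annotation
```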
The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model that performs best on a held-out validation set, discarding the rest. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low-error basin. We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory costs — we call the result a "model soup." When fine-tuning large pre-trained models such as CLIP, ALIGN, and a ViT-G pre-trained on JFT, our soup recipe provides significant improvements over the best model in a hyperparameter sweep on ImageNet. The resulting ViT-G model attains 90.94% top-1 accuracy on ImageNet, achieving a new state of the art. Furthermore, we show that the model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks. Finally, we analytically relate the performance similarity of weight averaging and logit ensembling to the flatness of the loss and the confidence of the predictions, and validate this relation empirically. Code is available at https://github.com/mlfoundations/model-soups.
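Weight averaging itself is a one-liner over checkpoints, as in the uniform-soup sketch below; it assumes all models were fine-tuned from the same pre-trained initialization (so their weights are averageable) and that all state-dict entries are floating-point. The paper's "greedy soup" variant adds models one at a time, keeping each only if held-out accuracy improves.

```python
import torch

def uniform_soup(state_dicts):
    """Uniform model soup: elementwise average of the weights of models
    fine-tuned from the same pre-trained checkpoint."""
    soup = {k: v.clone().float() for k, v in state_dicts[0].items()}
    for sd in state_dicts[1:]:
        for k in soup:
            soup[k] += sd[k].float()
    return {k: v / len(state_dicts) for k, v in soup.items()}

# model.load_state_dict(uniform_soup([torch.load(p) for p in checkpoint_paths]))
```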