We explore a data-driven approach to learning to optimize neural networks. We construct a dataset of neural network checkpoints and train a generative model on the parameters. In particular, our model is a conditional diffusion transformer that, given an initial input parameter vector and a prompted loss, error, or return, predicts the distribution over parameter updates that achieve the desired metric. At test time, it can optimize neural networks with unseen parameters in a single update. We find that our approach successfully generates parameters for a wide range of loss prompts. Moreover, it can sample multimodal parameter solutions and has favorable scaling properties. We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
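The test-time interface this describes (prompt with a target loss, sample one parameter update) can be pictured with a short sketch. Everything below is illustrative: the stand-in denoiser, its dimensions, and the plain DDPM sampling loop are assumptions made for the example, not the paper's released model or code.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a conditional diffusion model over checkpoints:
# it predicts clean parameters from the noisy parameters theta_t, the starting
# parameters, the prompted loss, and the diffusion timestep. Sizes are made up.
class CheckpointDenoiser(nn.Module):
    def __init__(self, n_params: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_params + 2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, theta_t, theta_start, loss_prompt, t):
        cond = torch.cat([theta_t, theta_start, loss_prompt, t], dim=-1)
        return self.net(cond)  # predicted clean (updated) parameters

@torch.no_grad()
def sample_update(model, theta_start, loss_prompt, n_steps=1000):
    """Plain DDPM ancestral sampling over parameter space (sketch only)."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    theta_t = torch.randn_like(theta_start)
    for t in reversed(range(n_steps)):
        t_in = torch.full_like(loss_prompt, t / n_steps)
        theta0_hat = model(theta_t, theta_start, loss_prompt, t_in)
        # posterior mean of q(theta_{t-1} | theta_t, theta0_hat)
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
        coef0 = torch.sqrt(ab_prev) * betas[t] / (1 - ab_t)
        coeft = torch.sqrt(alphas[t]) * (1 - ab_prev) / (1 - ab_t)
        mean = coef0 * theta0_hat + coeft * theta_t
        noise = torch.randn_like(theta_t) if t > 0 else torch.zeros_like(theta_t)
        sigma = torch.sqrt(betas[t] * (1 - ab_prev) / (1 - ab_t))
        theta_t = mean + sigma * noise
    return theta_t

n_params = 64
model = CheckpointDenoiser(n_params)
theta_start = torch.randn(1, n_params)   # an unseen network's current weights
loss_prompt = torch.tensor([[0.1]])      # "achieve a loss of 0.1"
theta_new = sample_update(model, theta_start, loss_prompt)
```

With a trained model, resampling from the same prompt would yield different parameter vectors, which is how multimodal solutions arise.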
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose in-context learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.
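As a rough illustration of the architecture being scaled here, the sketch below patchifies a spatial latent into tokens and denoises it with a standard Transformer encoder. It is a simplification made for this listing: the conditioning is a toy linear embedding rather than DiT's adaLN-Zero blocks, and every size is an assumption.

```python
import torch
import torch.nn as nn

class TinyDiT(nn.Module):
    """Simplified DiT-style denoiser: latent patches -> Transformer -> noise prediction."""
    def __init__(self, latent_ch=4, latent_hw=32, patch=2, dim=256, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        n_tokens = (latent_hw // patch) ** 2
        patch_dim = latent_ch * patch * patch
        self.to_tokens = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.cond = nn.Linear(2, dim)  # toy embedding of (timestep, class label)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_patch = nn.Linear(dim, patch_dim)

    def forward(self, z, t, y):
        B, C, H, W = z.shape
        p = self.patch
        # (B, C, H, W) -> (B, num_patches, C*p*p)
        x = z.unfold(2, p, p).unfold(3, p, p)             # B, C, H/p, W/p, p, p
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.to_tokens(x) + self.pos
        tokens = tokens + self.cond(torch.stack([t, y], dim=-1)).unsqueeze(1)
        tokens = self.blocks(tokens)
        out = self.to_patch(tokens)                       # predicted noise per patch
        return out.reshape(B, H // p, W // p, C, p, p).permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)

model = TinyDiT()
z = torch.randn(2, 4, 32, 32)             # stand-in for a latent from a pretrained VAE
t = torch.rand(2)                         # diffusion timesteps in [0, 1]
y = torch.randint(0, 1000, (2,)).float()  # class labels (toy conditioning)
eps_hat = model(z, t, y)                  # (2, 4, 32, 32)
```

Increasing `depth`, `dim`, or the token count (smaller `patch`) raises the forward-pass Gflops, which is the scaling axis studied in the abstract.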
Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, such as Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of training steps required and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to the compute and memory overhead of the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs of many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our models and training code are open source.
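To make "replacing the hand-designed update rule with a flexible parametric function" concrete, here is a minimal per-parameter learned optimizer in the spirit of this line of work, not the authors' architecture: a tiny MLP maps simple per-parameter features to an update. The feature set, sizes, and quadratic inner problem are illustrative assumptions, and the MLP weights are untrained placeholders rather than meta-learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learned optimizer: a tiny MLP applied independently to each parameter.
# Inputs are per-parameter features [gradient, momentum]; output is the update.
# In practice W1, b1, W2, b2 would be meta-trained; here they are random placeholders.
W1 = rng.normal(scale=0.1, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.01, size=(16, 1))  # small init keeps early updates small
b2 = np.zeros(1)

def learned_update(grad, momentum):
    feats = np.stack([grad, momentum], axis=-1)   # (n_params, 2)
    h = np.tanh(feats @ W1 + b1)                  # (n_params, 16)
    return (h @ W2 + b2).squeeze(-1)              # (n_params,)

# Inner problem: minimize f(theta) = 0.5 * ||A theta - b||^2
A = rng.normal(size=(32, 8))
b = rng.normal(size=32)
theta = rng.normal(size=8)
momentum = np.zeros(8)

for step in range(100):
    grad = A.T @ (A @ theta - b)
    momentum = 0.9 * momentum + grad
    theta = theta + learned_update(grad, momentum)  # update rule is learned, not hand-designed

print("final loss:", 0.5 * np.sum((A @ theta - b) ** 2))
```

The memory and compute overhead discussed in the abstract comes from evaluating such a network (and storing its extra per-parameter state) at every optimization step.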
In recent years, deep learning has infiltrated every field it has touched, reducing the need for specialist knowledge and automating the process of knowledge discovery from data. This review argues that astronomy is no different, and that we are currently in the midst of a deep learning revolution that is transforming the way we do astronomy. We trace the history of astronomical connectionism from the early days of multilayer perceptrons, through the second wave of convolutional and recurrent neural networks, to the current third wave of self-supervised and unsupervised deep learning. We then predict that we will soon enter a fourth wave of astronomical connectionism, in which finetuned versions of an all-encompassing 'foundation' model will replace expertly crafted deep learning models. We argue that such a model can only be brought about through a symbiotic relationship between astronomy and connectionism, whereby astronomy provides high quality multimodal data to train the foundation model, and in turn the foundation model is used to advance astronomical research.
Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality, which is important for the practical deployment of these models. We additionally use precision and recall to compare how well DDPMs and GANs cover the target distribution. Finally, we show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable. We release our code at https://github.com/openai/improved-diffusion.
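For reference, the variance-learning modification referred to above is usually written as an interpolation between the two analytic extremes of the reverse-process variance, trained with a hybrid objective (standard DDPM notation; treat the exact weighting $\lambda$ as following the paper):

$$\Sigma_\theta(x_t, t) = \exp\!\big(v_\theta(x_t, t)\,\log\beta_t + (1 - v_\theta(x_t, t))\,\log\tilde{\beta}_t\big), \qquad L_{\text{hybrid}} = L_{\text{simple}} + \lambda\, L_{\text{vlb}},$$

where $\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$ is the posterior variance and $v_\theta$ is a per-dimension network output.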
It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification. Code: https://github.com/deepmind/functa
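A minimal illustration of the data-to-functa step (fit a coordinate network to one signal, then treat its parameters as the datapoint). This is a bare-bones sketch with a plain ReLU MLP fitted by gradient descent; the actual framework relies on meta-learning and compact modulation vectors rather than full weight vectors, so the sizes and fitting loop here are assumptions.

```python
import torch
import torch.nn as nn

def fit_functa(image: torch.Tensor, steps: int = 500, lr: float = 1e-3) -> torch.Tensor:
    """Fit a coordinate MLP f(x, y) -> pixel value to one image and
    return its flattened parameters as the 'functa' representation."""
    h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (h*w, 2)
    targets = image.reshape(-1, 1)                          # (h*w, 1)

    net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return torch.cat([p.detach().flatten() for p in net.parameters()])

# Each signal in the dataset becomes one parameter vector; downstream models
# (classifiers, generative models, imputation) then operate on these vectors.
image = torch.rand(32, 32)
functa_vector = fit_functa(image)
print(functa_vector.shape)
```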
Leveraging recent advances in deep learning, text-to-image generative models are currently attracting broad public attention. Two of these models, DALL-E 2 and Imagen, have demonstrated that highly photorealistic images can be generated from a simple text description of an image. Based on a novel approach to image generation called diffusion models, text-to-image models can produce many different types of high-resolution images, where human imagination is the only limit. However, these models require enormous computational resources to train, as well as handling massive datasets collected from the internet. In addition, neither the codebases nor the models have been released, which prevents the AI community from experimenting with these cutting-edge models and makes reproducing their results complicated, if not impossible. In this paper, we aim first to review the different approaches and techniques used by these models, and then to present our own implementation of a text-to-image model. Heavily based on DALL-E 2, we introduce several slight modifications to cope with the high computational cost it induces. We thus have the opportunity to run experiments to understand the capabilities of these models, notably in a low-resource regime. In particular, we provide deeper analyses than the authors of DALL-E 2, including ablation studies. Moreover, diffusion models use so-called guidance methods to help the generation process. We introduce a new guidance method that can be used in conjunction with other guidance methods to improve image quality. Finally, the images generated by our model are of reasonably good quality, without having to bear the significant training costs of state-of-the-art text-to-image models.
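For context on what "guidance methods" means here, a standard example (an existing technique, not the new method this paper proposes) is classifier-free guidance, which mixes conditional and unconditional noise predictions at sampling time:

$$\hat{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + w\,\big(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\big),$$

where $c$ is the text conditioning, $\varnothing$ a null prompt, and $w$ the guidance scale; larger $w$ trades sample diversity for fidelity to the prompt.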
Learning the distribution of a continuous or categorical response variable $\boldsymbol{y}$ given its covariates $\boldsymbol{x}$ is a fundamental problem in statistics and machine learning. Supervised learning algorithms based on deep neural networks have made great progress in predicting the mean of $\boldsymbol{y}$ given $\boldsymbol{x}$, but they are often criticized for their ability to accurately capture the uncertainty of their predictions. In this paper, we introduce classification and regression diffusion (CARD) models, which combine a diffusion-based conditional generative model and a pre-trained conditional mean estimator to accurately predict the distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$. We demonstrate the outstanding ability of CARD in conditional distribution prediction with both toy examples and real-world datasets, where the experimental results show that CARD generally outperforms state-of-the-art methods, including approaches based on Bayesian neural networks designed for uncertainty estimation, especially when the conditional distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$ is multimodal.
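One way to picture "combining a diffusion-based conditional generative model with a pre-trained conditional mean estimator" is to center the forward diffusion on the pre-trained prediction $f_\phi(\boldsymbol{x})$ rather than on zero; the following is a sketch in standard DDPM notation, close in spirit to the paper's construction rather than a verbatim restatement:

$$q(\boldsymbol{y}_t \mid \boldsymbol{y}_0, \boldsymbol{x}) = \mathcal{N}\!\big(\sqrt{\bar{\alpha}_t}\,\boldsymbol{y}_0 + (1 - \sqrt{\bar{\alpha}_t})\,f_\phi(\boldsymbol{x}),\; (1-\bar{\alpha}_t)\,\boldsymbol{I}\big),$$

with a learned reverse process $p_\theta(\boldsymbol{y}_{t-1} \mid \boldsymbol{y}_t, \boldsymbol{x}, f_\phi(\boldsymbol{x}))$ that denoises back to a sample of $\boldsymbol{y}$ given $\boldsymbol{x}$, so repeated sampling traces out the full conditional distribution rather than only its mean.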
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pretrained models. We are also competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0% top-1 accuracy on a linear probe of our features.
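The "linear probe" evaluation mentioned here is simple to state in code: freeze the pretrained model, extract features for each image, and fit a linear classifier on top. The sketch below uses scikit-learn with a placeholder feature extractor and toy data; the layer the features come from and all sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(images: np.ndarray) -> np.ndarray:
    """Placeholder: in the real setup this would run the frozen pretrained
    Transformer and pool its hidden states at some layer."""
    rng = np.random.default_rng(0)
    proj = rng.normal(size=(images.shape[1], 512))
    return images @ proj

# Toy data standing in for (flattened) CIFAR-10 images and labels.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(1000, 3072)), rng.integers(0, 10, 1000)
X_test, y_test = rng.normal(size=(200, 3072)), rng.integers(0, 10, 200)

# Linear probe: features stay frozen, only the linear classifier is trained.
clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(X_train), y_train)
print("probe accuracy:", clf.score(extract_features(X_test), y_test))
```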
There is a growing interest in the area of machine learning and creativity. This survey presents an overview of the history and the current state of computational creativity theory, key machine learning techniques (including generative deep learning), and the corresponding automatic evaluation methods. After a critical discussion of the major contributions to this field, we outline the current research challenges and emerging opportunities in the area.
Transformers have become the state-of-the-art neural network architecture across numerous domains of machine learning. This is partly due to their celebrated ability to transfer and to learn in-context based on few examples. Nevertheless, the mechanisms by which Transformers become in-context learners are not well understood and remain mostly an intuition. Here, we argue that training Transformers on auto-regressive tasks can be closely related to well-known gradient-based meta-learning formulations. We start by providing a simple weight construction that shows the equivalence of data transformations induced by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a regression loss. Motivated by that construction, we show empirically that when training self-attention-only Transformers on simple regression tasks either the models learned by GD and Transformers show great similarity or, remarkably, the weights found by optimization match the construction. Thus we show how trained Transformers implement gradient descent in their forward pass. This allows us, at least in the domain of regression problems, to mechanistically understand the inner workings of optimized Transformers that learn in-context. Furthermore, we identify how Transformers surpass plain gradient descent by an iterative curvature correction and learn linear models on deep data representations to solve non-linear regression tasks. Finally, we discuss intriguing parallels to a mechanism identified to be crucial for in-context learning termed induction-head (Olsson et al., 2022) and show how it could be understood as a specific case of in-context learning by gradient descent learning within Transformers.
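The core identity behind the weight construction can be stated compactly. One gradient-descent step on the in-context regression loss $L(W)=\frac{1}{2N}\sum_i \lVert W x_i - y_i\rVert^2$ changes the prediction on a query $x_q$ by

$$\Delta W\, x_q = -\frac{\eta}{N}\sum_{i=1}^{N} (W x_i - y_i)\, x_i^{\top} x_q,$$

i.e. a sum of per-example "value" vectors $-(W x_i - y_i)$ weighted by key-query inner products $x_i^{\top} x_q$, which is exactly the form computed by an (unnormalized) linear self-attention update. This is the sense in which a single linear self-attention layer can implement a GD step; the sketch follows the paper's construction in spirit, with constants absorbed into the projection matrices.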
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusion-inspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e., the large number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
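For the reader's convenience, the two stages described above are usually written in the following standard form (DDPM notation with noise schedule $\beta_1,\dots,\beta_T$ and $\bar{\alpha}_t=\prod_{s\le t}(1-\beta_s)$):

$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(\sqrt{1-\beta_t}\,x_{t-1},\; \beta_t \boldsymbol{I}\big), \qquad q(x_t \mid x_0) = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\,x_0,\; (1-\bar{\alpha}_t)\,\boldsymbol{I}\big),$$

$$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(\mu_\theta(x_t, t),\; \Sigma_\theta(x_t, t)\big),$$

where the forward process $q$ progressively adds Gaussian noise and the learned reverse process $p_\theta$ removes it step by step.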
Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to autoregressive language generation. We instead view diffusion as a complementary method that can augment the generative capabilities of existing pre-trained language models. We demonstrate that continuous diffusion models can be learned in the latent space of a pre-trained encoder-decoder model, enabling us to sample continuous latent representations that can be decoded into natural language with the pre-trained decoder. We show that our latent diffusion models are more effective at sampling novel text from data distributions than a strong autoregressive baseline and also enable controllable generation.
Fine-tuning reinforcement learning (RL) models has been challenging because of a lack of large-scale off-the-shelf datasets as well as high variance in transferability among different environments. Recent work has looked at tackling offline RL from the perspective of sequence modeling, with improved results following the introduction of the Transformer architecture. However, when the model is trained from scratch, it suffers from slow convergence. In this paper, we look to take advantage of this formulation of reinforcement learning as sequence modeling and investigate the transferability of sequence models pre-trained on other domains (vision, language) when fine-tuned on offline RL tasks (control, games). To this end, we also propose techniques to improve transfer between these domains. Results show consistent gains in terms of both convergence speed and reward on a variety of environments, accelerating training by 3-6x and achieving state-of-the-art performance across a variety of tasks using Wikipedia-pretrained and GPT2 language models. We hope that this work not only brings to light the potential of leveraging generic sequence modeling techniques and pre-trained models for RL, but also inspires future work on sharing knowledge between generative modeling tasks in entirely different domains.
We introduce SubGD, a novel few-shot learning method based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows reducing the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or a few parameters of the analytical system description. Such systems are ubiquitous in real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical system problem settings, significantly outperforming popular few-shot learning methods in terms of both sample efficiency and performance.
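A compact sketch of the subspace-identification step as described above: eigendecompose the auto-correlation matrix of update directions, then fine-tune with gradients restricted to the top eigenvectors. The variable names, the number of retained directions, and the plain projected-SGD loop are illustrative assumptions, not the authors' code.

```python
import numpy as np

def identify_subspace(updates: np.ndarray, k: int) -> np.ndarray:
    """updates: (n_updates, n_params) parameter-update directions collected
    while training on the source tasks. Returns an (n_params, k) basis."""
    C = updates.T @ updates / len(updates)   # auto-correlation matrix of updates
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    return eigvecs[:, -k:]                   # top-k eigendirections

def project(grad: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Restrict a gradient to the identified low-dimensional subspace."""
    return basis @ (basis.T @ grad)

# Toy usage: few-shot fine-tuning with SGD confined to the subspace.
rng = np.random.default_rng(0)
updates = rng.normal(size=(200, 50))         # logged updates from source tasks
basis = identify_subspace(updates, k=5)

theta = rng.normal(size=50)
A, b = rng.normal(size=(8, 50)), rng.normal(size=8)   # a "few-shot" regression task
for _ in range(100):
    grad = A.T @ (A @ theta - b)
    theta -= 0.01 * project(grad, basis)     # SGD step restricted to the subspace
```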
In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and to generate the corresponding output. Crucially, in-context learning happens only at inference time, without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between the tasks on which it succeeds and what is present in the training data. To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn "most" functions from this class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions; that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least-squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the model and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes, namely sparse linear functions, two-layer neural networks, and decision trees, with performance that matches or exceeds task-specific learning algorithms. Our code and models are available at https://github.com/dtsip/in-context-learning.
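The setup lends itself to a short sketch: sample a random linear function, build a prompt of (input, output) pairs plus a query, and compare a model's in-context prediction against the optimal least-squares estimator fit on the same in-context examples. The Transformer itself is left as a placeholder here; the data generation and the baseline follow the problem statement, while the dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_examples = 20, 40

def sample_prompt():
    """One in-context problem: examples (x_i, w.x_i) plus a query x_q."""
    w = rng.normal(size=d)
    X = rng.normal(size=(n_examples, d))
    y = X @ w
    x_query = rng.normal(size=d)
    return X, y, x_query, w

def least_squares_prediction(X, y, x_query):
    """Optimal baseline: fit w by least squares on the in-context examples."""
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_query @ w_hat

def transformer_prediction(X, y, x_query):
    """Placeholder for the trained Transformer's in-context prediction."""
    return 0.0  # in the real setup: model(prompt tokens) -> scalar

errors_ls, errors_tf = [], []
for _ in range(100):
    X, y, x_query, w = sample_prompt()
    target = x_query @ w
    errors_ls.append((least_squares_prediction(X, y, x_query) - target) ** 2)
    errors_tf.append((transformer_prediction(X, y, x_query) - target) ** 2)

print("least-squares MSE:", np.mean(errors_ls))
print("transformer MSE:  ", np.mean(errors_tf))
```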
Despite many recent successes of deep reinforcement learning (RL), its methods are still data inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to address this by leveraging the rich supervisory signal in unlabeled data to learn state representations. This work introduces three different representation learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features. Moreover, GRICA only requires an unordered collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns state representations by predicting the next state of the environment from the current state and action. The predictor is used together with a graph search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping. This method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be good likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, often with significantly faster optimization. In addition, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum. Code is available at https://github.com/google-research/vdm.
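The "remarkably short expression" referred to here is, up to the paper's notation choices, the continuous-time diffusion loss written in terms of the signal-to-noise ratio $\mathrm{SNR}(t)=\alpha_t^2/\sigma_t^2$ of the diffused data $z_t=\alpha_t x + \sigma_t \epsilon$ (given as a reminder of the general form rather than a verbatim quotation):

$$\mathcal{L}_\infty(x) = -\tfrac{1}{2}\,\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I),\, t \sim \mathcal{U}(0,1)}\Big[\mathrm{SNR}'(t)\,\big\lVert x - \hat{x}_\theta(z_t; t)\big\rVert_2^2\Big],$$

which makes the claimed invariance plausible at a glance: reparametrizing time only changes how $\mathrm{SNR}(t)$ is traversed, so the value of the bound depends on the schedule only through its endpoints, while the variance of the Monte Carlo estimator does depend on the schedule and can therefore be minimized over it.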