Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
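The core of real NVP is the affine coupling layer, whose triangular Jacobian makes the log-determinant trivial. Below is a minimal PyTorch sketch of such a layer; the half-split and the small fully connected s/t networks are illustrative assumptions (the paper uses masked convolutions for image data).

```python
# A minimal PyTorch sketch of real NVP's affine coupling layer. The
# half-split and the small fully connected s/t networks are illustrative
# assumptions; the paper uses masked convolutions for image data.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        half = dim // 2
        self.s = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                               nn.Linear(hidden, half), nn.Tanh())
        self.t = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                               nn.Linear(hidden, half))

    def forward(self, x):
        # x1 passes through unchanged; x2 is scaled and shifted by
        # functions of x1, so the Jacobian is triangular.
        x1, x2 = x.chunk(2, dim=-1)
        s = self.s(x1)
        y2 = x2 * torch.exp(s) + self.t(x1)
        log_det = s.sum(dim=-1)  # log|det J| = sum of log-scales
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        # Exact, equally cheap inversion: s and t are recomputed from y1 = x1.
        y1, y2 = y.chunk(2, dim=-1)
        x2 = (y2 - self.t(y1)) * torch.exp(-self.s(y1))
        return torch.cat([y1, x2], dim=-1)
```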
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
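NICE's building block is even simpler: an additive coupling whose Jacobian determinant is exactly 1. A minimal sketch follows, assuming a half-split and a small fully connected coupling network m (the paper stacks several such layers, alternating which half is transformed, and ends with a diagonal scaling to recover non-trivial volume change).

```python
# A minimal sketch of NICE's additive coupling layer. The layer is
# volume-preserving (log|det J| = 0), so the exact log-likelihood needs no
# determinant computation at all for these blocks.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        half = dim // 2
        self.m = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                               nn.Linear(hidden, half))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.m(x1)], dim=-1)  # det J = 1

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.m(y1)], dim=-1)  # exact inverse
```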
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1 × 1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow.
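The invertible 1 × 1 convolution amounts to one learned C × C channel-mixing matrix applied at every spatial position, contributing H·W·log|det W| to the log-likelihood. A minimal sketch, assuming a random-rotation initialization and a plain matrix inverse (the paper also gives an LU-decomposed parameterization for cheaper determinants):

```python
# A minimal sketch of Glow's invertible 1x1 convolution. The random-rotation
# initialization and the plain matrix inverse below are simplifying
# assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        q, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.W = nn.Parameter(q)  # start from a random rotation

    def forward(self, x):  # x: (B, C, H, W)
        _, _, h, w = x.shape
        y = F.conv2d(x, self.W.unsqueeze(-1).unsqueeze(-1))
        # Every one of the h*w spatial positions contributes log|det W|.
        log_det = h * w * torch.slogdet(self.W)[1]
        return y, log_det

    def inverse(self, y):
        w_inv = torch.inverse(self.W)
        return F.conv2d(y, w_inv.unsqueeze(-1).unsqueeze(-1))
```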
Flow-based generative models have recently become one of the most efficient approaches to modelling data generation. Indeed, they are constructed from a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible 1 × 1 convolution. However, the 1 × 1 convolution has limited flexibility compared to a standard convolution. In this paper, we propose a novel invertible n × n convolution approach that overcomes the limitations of the invertible 1 × 1 convolution. In addition, our proposed network is not only tractable and invertible but also uses fewer parameters than a standard convolution. Experiments on the CIFAR-10, ImageNet, and Celeb-HQ datasets show that our invertible n × n convolution helps to significantly improve the performance of generative models.
Recent advances in generative machine learning models have rekindled research interest in the area of password guessing. Data-driven password guessing approaches based on GANs and deep latent variable models have shown impressive generalization performance and offer compelling properties for password guessing. In this paper, we propose PassFlow, a flow-based generative model approach to password guessing. Flow-based models allow for precise log-likelihood computation and optimization, which enables exact latent variable inference. Additionally, flow-based models provide meaningful latent space representations, which enable operations such as exploring specific subspaces of the latent space and interpolation. We demonstrate the applicability of generative flows to the context of password guessing, departing from previous applications of flow networks, which were mainly limited to the continuous space of image generation. We show that PassFlow is able to outperform prior state-of-the-art GAN-based approaches on the password guessing task while using a training set that is orders of magnitude smaller than those of prior work. Furthermore, a qualitative analysis of the generated samples shows that PassFlow can accurately model the distribution of the original passwords, with even non-matched samples closely resembling human-like passwords.
I describe a trick for training flow models using a prescribed rule as a surrogate for maximum likelihood. The utility of the trick is limited for unconditional models, but an extension of the approach, applied to maximum likelihood of the joint distribution of data and conditioning information, can be used to train sophisticated conditional flow models. Unlike previous approaches, the method is very simple: it requires no explicit knowledge of the conditional distribution, no auxiliary networks or other specific architecture, and no additional loss terms beyond maximum likelihood, and it preserves the correspondence between latent space and data space. The resulting models have all the properties of unconditional flow models, are robust to unexpected inputs, and can predict the distribution of solutions conditioned on a given input. They come with guarantees of prediction representativeness and are a natural and powerful way to solve highly uncertain problems. I demonstrate these properties on easily visualized toy problems, then use the method to successfully generate class-conditional images and to reconstruct highly degraded images via super-resolution.
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as AutoEncoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach yields strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. Broadly speaking, the main ambition of this work is to close the gap between the normalizing flow and autoencoder literatures under the common framework of invertibility and exact maximum likelihood.
Normalizing flows are bijective mappings between inputs and latent representations with a fully factorized distribution. They are very attractive due to exact likelihood evaluation and efficient sampling. However, their effective capacity is often insufficient, since the bijectivity constraint limits the model width. We address this issue by incrementally padding intermediate representations with noise. We precondition the noise in accordance with previous invertible units, which we describe as cross-unit coupling. Our invertible glow-like modules increase the model expressivity by fusing a densely connected block with Nyström self-attention. We refer to our architecture as DenseFlow, since both cross-unit and intra-module couplings rely on dense connectivity. Experiments show significant improvements due to the proposed contributions and reveal state-of-the-art density estimation under moderate computing budgets.
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
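A minimal sketch of one MAF transform follows, under an assumed `cond_net` interface that returns per-dimension shifts and log-scales depending only on the preceding dimensions (real implementations obtain these in a single pass with a masked MADE network; the explicit loop in the sampling direction is for clarity and shows why synthesis is sequential).

```python
# A minimal sketch of one Masked Autoregressive Flow (MAF) transform. The
# `cond_net` interface (returning mu_i and log-scales alpha_i that depend
# only on x_<i) is an assumption, not the authors' code.
import torch

def maf_forward(x, cond_net):
    """Data -> noise: the density-evaluation direction, one parallel pass."""
    mu, alpha = cond_net(x)
    u = (x - mu) * torch.exp(-alpha)
    log_det = -alpha.sum(dim=-1)  # triangular Jacobian
    return u, log_det

@torch.no_grad()
def maf_inverse(u, cond_net):
    """Noise -> data: the sampling direction, one dimension at a time."""
    x = torch.zeros_like(u)
    for i in range(u.shape[-1]):
        mu, alpha = cond_net(x)  # only x_<i affects dimension i
        x[..., i] = u[..., i] * torch.exp(alpha[..., i]) + mu[..., i]
    return x
```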
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
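For reference, the change-of-variables identity on which all of these constructions rest (standard material, not specific to the review), written for a bijection f with base density p_z and for a composition of K bijections:

```latex
\log p_x(x) = \log p_z\bigl(f(x)\bigr)
            + \log\left|\det \frac{\partial f(x)}{\partial x}\right|,
\qquad
\log p_x(x) = \log p_z(z_K)
            + \sum_{k=1}^{K} \log\left|\det \frac{\partial f_k(z_{k-1})}{\partial z_{k-1}}\right|,
\quad z_k = f_k(z_{k-1}),\; z_0 = x.
```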
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
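The key computational point is that sampling requires only one pass of the autoregressive network, since its inputs are the already-available noise variables. A minimal sketch of one IAF step using the paper's gated parameterization; `ar_net` (a MADE-style masked network returning per-dimension m and s from the preceding dimensions of its input) is an assumed component:

```python
# A minimal sketch of one IAF step with the gated update from the paper.
import torch

def iaf_step(eps, ar_net):
    m, s = ar_net(eps)                      # m_i, s_i depend only on eps_<i
    sigma = torch.sigmoid(s)                # gated parameterization
    z = sigma * eps + (1 - sigma) * m       # one parallel pass: fast sampling
    log_det = torch.log(sigma).sum(dim=-1)  # triangular Jacobian
    return z, log_det
```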
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Normalizing flows are generative models that provide tractable density estimation via an invertible transformation from a simple base distribution to a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this limitation have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with Conformal Embedding Flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with synthetic and real-world data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
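The tractability argument can be summarized in a sketch of the underlying identity (not the paper's notation): for an injective embedding g: R^d → R^D with Jacobian J, the manifold density involves det(J^T J)^{1/2}, which a conformal embedding collapses to a scalar power:

```latex
p_x\bigl(g(z)\bigr) = p_z(z)\,\det\bigl(J^\top J\bigr)^{-1/2},
\qquad
J^\top J = \lambda(z)^2 I_d \;\Rightarrow\; \det\bigl(J^\top J\bigr)^{1/2} = \lambda(z)^d .
```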
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we combine the key ideas of the renormalization group (RG) and sparse prior distributions to design a hierarchical flow-based generative model, RG-Flow, which can separate the information in an image at different scales and extract disentangled representations at each scale. We demonstrate our method on synthetic multi-scale image datasets and on the CelebA dataset, showing that the disentangled representations enable semantic manipulation and style mixing of images at different scales. To visualize the latent representations, we introduce receptive fields for flow-based models and show that the receptive fields of RG-Flow are similar to those of convolutional neural networks. In addition, we replace the widely adopted isotropic Gaussian prior distribution with a sparse Laplacian distribution to further enhance the disentanglement of the representations. From a theoretical perspective, our proposed method has O(log L) complexity for inpainting an image with edge length L, compared with previous generative models of O(L^2) complexity.
Normalizing flows provide an elegant approach to generative modeling that allows for efficient sampling and exact density evaluation of data distributions. However, when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology, current techniques have significant limitations in their expressivity. We introduce a new statistical framework for learning a mixture of local normalizing flows as "chart maps" over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows, namely that they admit exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector-quantized auto-encoder (VQ-AE) and learn the distributions over the charts using conditional flows. We validate experimentally that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.
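For concreteness, a minimal sketch of the bits-per-dimension score in which such likelihood comparisons are typically reported (e.g., a CIFAR-10-trained flow evaluated on SVHN); the `model.log_prob` interface is an assumption, not the authors' code, and the convention assumes 8-bit images rescaled to [0, 1]:

```python
# A minimal sketch of bits-per-dimension evaluation for a trained
# likelihood model; lower is better, and the paradox in the paper is that
# OOD data can score *lower* (higher likelihood) than the training set.
import math
import torch

@torch.no_grad()
def bits_per_dim(model, x):
    """x: batch of images in [0, 1], shape (B, C, H, W)."""
    n_dims = x[0].numel()
    log_px = model.log_prob(x)  # natural-log density per image, shape (B,)
    # nats -> bits per dimension; + log(256) undoes the rescaling of 8-bit
    # data so that the numbers are comparable across papers.
    return (-log_px / n_dims + math.log(256.0)) / math.log(2.0)
```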
Normalizing flows are diffeomorphic, and typically dimension-preserving, models trained using the likelihood of the model. We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, known as the funnel. We demonstrate its efficacy on a variety of datasets, and show that it improves upon or matches the performance of existing flows while having a reduced latent-space dimensionality. The funnel layer can be constructed from a wide range of transformations, including restricted convolutions and feed-forward layers.
Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.
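The factorization the network implements, with each conditional a discrete distribution (a 256-way softmax) over raw pixel values for an n × n image:

```latex
p(\mathbf{x}) = \prod_{i=1}^{n^2} p\bigl(x_i \mid x_1, \ldots, x_{i-1}\bigr).
```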
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels. The source code is available at https://github.com/NVlabs/NVAE.