A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
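Since the review above centers on composing a simple base distribution with bijective transformations, here is a minimal sketch (not taken from the review; the function names and the toy affine transform are illustrative) of the change-of-variables computation that underlies all such flows:

```python
# Minimal sketch: log-density of x under a flow x = T(z), z ~ base, using
#   log p_x(x) = log p_z(T^{-1}(x)) + log |det J_{T^{-1}}(x)|.
import torch

def flow_log_prob(x, inverse_transforms, base_dist):
    """inverse_transforms: list of callables mapping x -> (x_prev, log_det_of_inverse)."""
    log_det_total = torch.zeros(x.shape[0])
    z = x
    for inv in inverse_transforms:          # apply T_K^{-1}, ..., T_1^{-1} in turn
        z, log_det = inv(z)
        log_det_total = log_det_total + log_det
    return base_dist.log_prob(z) + log_det_total

# Toy usage: a single affine "flow" z = (x - b) / s with log|det| = -sum(log s).
s, b = torch.tensor([2.0, 0.5]), torch.tensor([1.0, -1.0])
affine_inv = lambda x: ((x - b) / s, -torch.log(s).sum().expand(x.shape[0]))
base = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1)
x = torch.randn(4, 2)
print(flow_log_prob(x, [affine_inv], base))
```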
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1 × 1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow.
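The invertible 1 × 1 convolution is simple enough to sketch directly. The code below is a hedged illustration of the layer's core idea, not the authors' implementation: every spatial position's channel vector is multiplied by the same learned c × c matrix W, so the layer's log-Jacobian-determinant is H·W·log|det W|.

```python
# Hedged sketch of Glow's invertible 1x1 convolution (not the authors' code).
import torch

def invertible_1x1_forward(x, W):
    """x: (batch, c, H, W) feature map, W: (c, c) weight matrix."""
    b, c, h, w = x.shape
    y = torch.einsum("ij,bjhw->bihw", W, x)          # 1x1 convolution = channel mixing
    log_det = h * w * torch.slogdet(W)[1]            # per-sample log|det Jacobian|
    return y, log_det.expand(b)

def invertible_1x1_inverse(y, W):
    return torch.einsum("ij,bjhw->bihw", torch.inverse(W), y)

# Quick check of invertibility on random data.
W = torch.linalg.qr(torch.randn(8, 8))[0]            # random rotation init, as in Glow
x = torch.randn(2, 8, 4, 4)
y, ld = invertible_1x1_forward(x, W)
print(torch.allclose(invertible_1x1_inverse(y, W), x, atol=1e-5), ld)
```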
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
Normalizing flows are prominent deep generative models that provide tractable probability distributions and efficient density estimation. However, they are known to fail at detecting out-of-distribution (OOD) inputs because they directly encode the local features of the input representations in their latent space. In this paper, we show that flows, if extended by an attention mechanism, can reliably detect outliers, including adversarial attacks. Our approach requires no outlier data for training, and we demonstrate the efficiency of our OOD detection method by reporting state-of-the-art performance in diverse experimental settings. Code is available at https://github.com/computationalradiationphysics/inflow.
Normalizing flows are diffeomorphic, typically dimension-preserving, models trained using the likelihood of the model. We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, known as the funnel. We demonstrate its efficacy on a variety of datasets and show that it improves upon or matches the performance of existing flows while having a reduced latent-space size. The funnel layer can be constructed from a wide range of transformations, including restricted convolutions and feed-forward layers.
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
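To make the IAF construction concrete, the following is a hedged, heavily simplified sketch of one IAF step, not the authors' code: the conditioner here is a single strictly masked linear layer rather than the MADE-style network used in the paper, and the simple affine update z_new = sigma * z + mu stands in for the numerically stabilized gated form.

```python
# Simplified IAF step: an autoregressive conditioner produces per-dimension
# (mu, sigma); the Jacobian of z_new = sigma * z + mu is triangular,
# so log|det| = sum(log sigma).
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose weight is multiplied by a fixed binary mask."""
    def __init__(self, in_f, out_f, mask):
        super().__init__(in_f, out_f)
        self.register_buffer("mask", mask)
    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

def iaf_step(z, autoregressive_net):
    params = autoregressive_net(z)                 # (batch, 2 * dim)
    mu, log_sigma = params.chunk(2, dim=-1)
    sigma = torch.sigmoid(log_sigma) + 1e-3        # keep sigma positive and stable
    z_new = sigma * z + mu
    log_det = torch.log(sigma).sum(dim=-1)         # triangular Jacobian
    return z_new, log_det

# Strictly autoregressive mask: outputs for dimension i see only z_{<i}.
dim = 3
mask = torch.zeros(2 * dim, dim)
for i in range(dim):
    mask[i, :i] = 1.0
    mask[dim + i, :i] = 1.0
net = MaskedLinear(dim, 2 * dim, mask)
z_new, log_det = iaf_step(torch.randn(5, dim), net)
print(z_new.shape, log_det.shape)
```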
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers that provide a tractable exact likelihood without requiring any regularization terms. This is achieved while retaining full freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for training existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach performs considerably better than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. Broadly speaking, the main ambition of this work is to close the gap between the normalizing-flow and autoencoder literatures under the common framework of invertibility and exact maximum likelihood.
Flow-based generative models have recently become one of the most efficient approaches for modeling data generation. Indeed, they are constructed from a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible $1 \times 1$ convolution. However, the $1 \times 1$ convolution has limited flexibility compared to a standard convolution. In this paper, we propose a novel invertible $n \times n$ convolution approach that overcomes the limitations of the invertible $1 \times 1$ convolution. Moreover, our proposed network is not only tractable and invertible but also uses fewer parameters than a standard convolution. Experiments on the CIFAR-10, ImageNet, and CelebA-HQ datasets show that our invertible $n \times n$ convolutions help to significantly improve the performance of generative models.
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
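The "trivial Jacobian determinant" mentioned above comes from NICE's additive coupling building block. The sketch below is a hedged illustration of that idea rather than the authors' code: split x into two parts, leave one part unchanged, and shift the other by a neural network applied to the first, giving a unit-determinant Jacobian and a trivially computable inverse.

```python
# Hedged sketch of NICE-style additive coupling: log|det J| = 0 and the inverse
# only needs the coupling network m in the forward direction.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.m = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        y2 = x2 + self.m(x1)
        return torch.cat([x1, y2], dim=1)          # log|det J| = 0

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        x2 = y2 - self.m(y1)
        return torch.cat([y1, x2], dim=1)

layer = AdditiveCoupling(dim=4)
x = torch.randn(3, 4)
print(torch.allclose(layer.inverse(layer(x)), x, atol=1e-6))
```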
Normalizing flows are generative models that provide tractable density estimation via an invertible transformation from a simple base distribution to a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this limitation have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with conformal embedding flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with synthetic and real-world data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lie on a low-dimensional manifold embedded in a high-dimensional ambient space. In this paper, we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima can be attained in which the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality-reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, thereby addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.
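A schematic sketch of the kind of two-step procedure described above may help fix ideas. The components below are placeholders chosen purely for illustration (a PCA stand-in for the learned dimensionality reducer and a full-covariance Gaussian as the latent density estimator); the paper's procedures use learned encoders/decoders and richer density models.

```python
# Two-step sketch: (1) learn a low-dimensional representation,
# (2) fit a maximum-likelihood density estimator on the latents.
import numpy as np

def two_step_density(x_train, latent_dim):
    # Step 1: dimensionality reduction (PCA stand-in for a learned encoder).
    mean = x_train.mean(axis=0)
    _, _, vt = np.linalg.svd(x_train - mean, full_matrices=False)
    encode = lambda x: (x - mean) @ vt[:latent_dim].T

    # Step 2: maximum-likelihood density estimation in latent space.
    z = encode(x_train)
    mu, cov = z.mean(axis=0), np.cov(z, rowvar=False) + 1e-6 * np.eye(latent_dim)
    cov_inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)

    def latent_log_density(x):
        d = encode(x) - mu
        quad = np.einsum("ni,ij,nj->n", d, cov_inv, d)
        return -0.5 * (quad + logdet + latent_dim * np.log(2 * np.pi))
    return latent_log_density

log_p = two_step_density(np.random.randn(500, 10), latent_dim=2)
print(log_p(np.random.randn(3, 10)))
```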
We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models, $p(x|z)$ and $q(z|x)$, that form a cycle. This is motivated by the observation that deep generative models, in addition to the likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ to extract representations, but they typically rely on an uninformative prior distribution $p(z)$ to define the joint distribution, which can cause problems such as posterior collapse and manifold mismatch. To explore the possibility of modeling a joint distribution using only $p(x|z)$ and $q(z|x)$, we study their compatibility and determinacy, corresponding to the existence and uniqueness of a joint distribution whose conditional distributions coincide with them. We develop a general theory of operable equivalence criteria for compatibility and sufficient conditions for determinacy. Based on this theory, we propose a novel generative modeling framework, CyGen, that only uses the two cyclic conditional models. We develop methods to achieve compatibility and determinacy, and to fit and generate data using the conditional models. With the prior constraint removed, CyGen fits the data better and captures more representative features, as supported by both synthetic and real-world experiments.
Machine learning models often encounter samples that differ from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning it a class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging because modeling all possible unknown distributions is intractable. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open-set recognition, and out-of-distribution detection. Despite similar and shared concepts, out-of-distribution detection, open-set detection, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys aim to provide an overview of these approaches, they tend to focus on a specific domain without examining the relationships between domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in these areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodologies synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms both offer exact density evaluation and sampling, but rely on the parameterization of an easily invertible elementwise transformation, whose choice determines the flexibility of these models. Building upon recent work, we propose a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility. We demonstrate that neural spline flows improve density estimation, variational inference, and generative modeling of images.
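To give a feel for the monotonic rational-quadratic transform named above, the following sketch evaluates a single bin following the published rational-quadratic formulation; bin selection, the neural-network parameterization of knots and derivatives, and the analytic inverse are all omitted, and the helper name is my own.

```python
# Hedged sketch: one bin of a monotonic rational-quadratic spline.
import numpy as np

def rq_bin(x, xk, xk1, yk, yk1, dk, dk1):
    """Map x in [xk, xk1] to [yk, yk1] with boundary derivatives dk, dk1 > 0."""
    s = (yk1 - yk) / (xk1 - xk)                    # bin slope
    xi = (x - xk) / (xk1 - xk)                     # position within the bin, in [0, 1]
    denom = s + (dk1 + dk - 2 * s) * xi * (1 - xi)
    y = yk + (yk1 - yk) * (s * xi**2 + dk * xi * (1 - xi)) / denom
    dydx = s**2 * (dk1 * xi**2 + 2 * s * xi * (1 - xi) + dk * (1 - xi)**2) / denom**2
    return y, np.log(dydx)                         # log-derivative feeds the flow's log|det|

# Monotone increasing over the bin, matching the endpoints and boundary derivatives.
xs = np.linspace(-1.0, 1.0, 5)
print(rq_bin(xs, xk=-1.0, xk1=1.0, yk=0.0, yk1=2.0, dk=0.5, dk1=1.5))
```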
The problem of detecting the Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks. It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable and often tend to make over-confident predictions for OoDs, assigning to them a higher density than to the in-distribution data. This over-confidence in a single model can be potentially mitigated with Bayesian inference over the model parameters that take into account epistemic uncertainty. This paper investigates three approaches to Bayesian inference: stochastic gradient Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian. The inference is implemented over the weights of the deep neural networks that parameterize the likelihood of the Variational Autoencoder. We empirically evaluate the approaches against several benchmarks that are often used for OoD detection: estimation of the marginal likelihood utilizing sampled model ensemble, typicality test, disagreement score, and Watanabe-Akaike Information Criterion. Finally, we introduce two simple scores that demonstrate the state-of-the-art performance.
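Two of the ensemble-based scores mentioned above are simple enough to sketch. The code below is a hedged illustration, not the paper's implementation: given per-input log-likelihoods evaluated under an ensemble of posterior weight samples, it computes the WAIC score (ensemble-mean log-likelihood penalized by its variance across weight samples) and a Monte Carlo estimate of the log marginal likelihood; all names and numbers are illustrative.

```python
# Hedged sketch of ensemble-based OOD scores; lower scores flag likely OOD inputs.
import numpy as np

def waic_score(log_liks):
    """log_liks: (n_weight_samples, n_inputs) array of log p(x | theta_s)."""
    return log_liks.mean(axis=0) - log_liks.var(axis=0)

def ensemble_log_marginal(log_liks):
    """Monte Carlo estimate of log E_theta[p(x | theta)] via log-sum-exp."""
    s = log_liks.shape[0]
    m = log_liks.max(axis=0)
    return m + np.log(np.exp(log_liks - m).sum(axis=0)) - np.log(s)

# Toy usage with fabricated numbers: in-distribution inputs tend to receive
# consistent (low-variance) log-likelihoods across weight samples, OOD inputs less so.
rng = np.random.default_rng(0)
log_liks = rng.normal(loc=[-100.0, -95.0], scale=[1.0, 15.0], size=(50, 2))
print(waic_score(log_liks), ensemble_log_marginal(log_liks))
```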
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
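The real NVP transformations can be illustrated with an affine coupling layer. The sketch below is hedged and not the authors' code: compared with the additive coupling sketched earlier for NICE, the second partition is also rescaled elementwise, so the transform remains trivially invertible while the log-determinant becomes the sum of the log-scales rather than zero.

```python
# Hedged sketch of real-NVP-style affine coupling.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                   # keep scales in a stable range
        y2 = x2 * torch.exp(log_s) + t
        return torch.cat([x1, y2], dim=1), log_s.sum(dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)

layer = AffineCoupling(dim=6)
x = torch.randn(3, 6)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5), log_det.shape)
```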
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
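The log-determinant approximation mentioned above combines a power-series expansion with stochastic trace estimation. The sketch below is a hedged illustration of that idea, not the authors' code: for a residual block x → x + g(x) with Lip(g) < 1, log det(I + J_g) = Σ_{k≥1} (−1)^{k+1} tr(J_g^k)/k, and each trace is estimated with a single Hutchinson probe via repeated vector-Jacobian products.

```python
# Hedged sketch of the power-series + Hutchinson log-det estimator.
import torch

def residual_log_det_estimate(g, x, n_terms=10):
    x = x.detach().requires_grad_(True)
    gx = g(x)
    v = torch.randn_like(x)                          # Hutchinson probe vector
    w = v
    log_det = torch.zeros(x.shape[0])
    for k in range(1, n_terms + 1):
        w = torch.autograd.grad(gx, x, grad_outputs=w, retain_graph=True)[0]
        trace_est = (w * v).flatten(1).sum(dim=1)    # ~ tr(J_g^k) per sample
        log_det = log_det + (-1.0) ** (k + 1) * trace_est / k
    return log_det

# Toy check against the exact value for a contractive linear g.
A = 0.3 * torch.eye(3) + 0.05 * torch.randn(3, 3)
g = lambda x: x @ A.T
x = torch.randn(4, 3)
exact = torch.slogdet(torch.eye(3) + A)[1]
print(residual_log_det_estimate(g, x, n_terms=20).mean().item(), exact.item())
```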
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
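The density computation in a masked autoregressive flow is compact enough to sketch. The code below is a hedged illustration rather than the authors' code: the conditioner is a deliberately simple autoregressive function (each μ_i and α_i depends only on x_{<i}), where MAF would use a MADE network. Density evaluation needs only this single pass, whereas sampling would have to invert the transform dimension by dimension.

```python
# Hedged sketch of density evaluation under one MAF-style layer.
import numpy as np

def maf_log_prob(x, weights_mu, weights_alpha):
    """x: (n, d); weights_*: (d, d) conditioner weights, masked strictly lower-triangular."""
    d = x.shape[1]
    tril = np.tril(np.ones((d, d)), k=-1)            # enforce autoregressive structure
    mu = x @ (weights_mu * tril).T
    alpha = np.clip(x @ (weights_alpha * tril).T, -5.0, 5.0)
    u = (x - mu) * np.exp(-alpha)                     # x -> base-space noise
    log_base = -0.5 * (u**2 + np.log(2 * np.pi)).sum(axis=1)   # standard normal base
    return log_base - alpha.sum(axis=1)               # change-of-variables correction

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
print(maf_log_prob(x, rng.normal(size=(4, 4)), rng.normal(size=(4, 4))))
```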
Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allows their parameters to be optimized efficiently via maximum likelihood. However, data of interest are typically assumed to live on some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modeling mismatch by construction: the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to compute. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and are therefore not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for the data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches that ignore the volume-change term by more accurately learning manifolds and the corresponding distributions, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ai-labs/rectangular-flows.
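The volume-change term discussed above can be written down directly for an injective flow f: R^d → R^D, where the density correction is −(1/2) log det(J_f(z)ᵀ J_f(z)). The sketch below computes it naively with an explicit Jacobian, which is only feasible for tiny d and D; the paper's contribution is estimating the gradient of this term tractably in high dimensions, which this sketch does not attempt. The toy map and function name are illustrative.

```python
# Hedged sketch: exact (naive) volume-change term for an injective flow.
import torch

def injective_volume_term(f, z):
    """z: (d,) latent point; f maps (d,) -> (D,). Returns -0.5 * log det(J^T J)."""
    J = torch.autograd.functional.jacobian(f, z)     # (D, d)
    return -0.5 * torch.logdet(J.T @ J)

# Toy injective map from R^2 into R^3.
f = lambda z: torch.stack([z[0], z[1], z[0] ** 2 + torch.sin(z[1])])
z = torch.tensor([0.3, -1.2])
print(injective_volume_term(f, z))
```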