Any explicit functional representation $f$ of a density is hampered by two main obstacles when we wish to use it as a generative model: designing $f$ so that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1} f$ integrates to 1. This becomes increasingly complicated as $f$ itself becomes complicated. In this paper, we show that when modeling one-dimensional conditional densities with a neural network, $Z$ can be exactly and efficiently computed by letting the network represent the cumulative distribution function of the target density and applying the fundamental theorem of calculus. We also derive a fast algorithm for sampling from the resulting representation via the inverse transform method. By extending these principles to higher dimensions, we introduce the \textbf{Neural Inverse Transform Sampler (NITS)}, a novel deep learning framework for modeling and sampling from general, multidimensional, compact probability densities. NITS is a highly expressive density estimator featuring end-to-end differentiability, fast sampling, and exact and cheap likelihood evaluation. We demonstrate the applicability of NITS on realistic, high-dimensional density estimation tasks: likelihood-based generative modeling on the CIFAR-10 dataset, and density estimation on the UCI suite of benchmark datasets, where NITS produces compelling results rivaling or surpassing the state of the art.
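A minimal 1D sketch of the two ideas (not the paper's architecture): a monotone function stands in for the CDF network, the density comes from differentiating it, and sampling inverts the CDF numerically by bisection.

```python
import numpy as np

# toy monotone "network" on a compact interval [a, b]; a real NITS model
# would parameterize this with a constrained neural network
a, b = -3.0, 3.0
raw = lambda x: np.tanh(2 * x) + 0.1 * x                 # strictly increasing
cdf = lambda x: (raw(x) - raw(a)) / (raw(b) - raw(a))    # rescale so F(a)=0, F(b)=1

def pdf(x, h=1e-5):
    # density = dF/dx (finite differences here; autodiff in practice);
    # it integrates to exactly 1 because F(a) = 0 and F(b) = 1
    return (cdf(x + h) - cdf(x - h)) / (2 * h)

def sample(n, rng=np.random.default_rng(0), iters=60):
    # inverse transform sampling: solve F(x) = u by bisection on [a, b]
    u = rng.uniform(size=n)
    lo, hi = np.full(n, a), np.full(n, b)
    for _ in range(iters):
        mid = (lo + hi) / 2
        go_right = cdf(mid) < u
        lo = np.where(go_right, mid, lo)
        hi = np.where(go_right, hi, mid)
    return (lo + hi) / 2

xs = sample(10_000)   # draws distributed according to pdf
```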
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
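For reference, the single identity underlying all of these constructions: if $u \sim p_u$ and $x = T(u)$ for a diffeomorphism $T$, then

$$p_x(x) = p_u\left(T^{-1}(x)\right)\left|\det J_T\left(T^{-1}(x)\right)\right|^{-1},$$

and a composition of bijections simply accumulates the log-Jacobian terms of its stages.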
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms both offer exact density evaluation and sampling, but rely on the parameterization of an easily invertible elementwise transformation, whose choice determines the flexibility of these models. Building upon recent work, we propose a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility. We demonstrate that neural spline flows improve density estimation, variational inference, and generative modeling of images.
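A minimal NumPy sketch of the monotonic rational-quadratic spline (in the Gregory and Delbourgo form used by neural spline flows); in the actual models the knots and derivatives are predicted by a coupling or autoregressive network, which is omitted here.

```python
import numpy as np

def rq_spline(x, xk, yk, dk):
    """Monotonic rational-quadratic spline transform.
    xk, yk : increasing knot locations on the input/output axes.
    dk     : positive derivatives at the knots (guarantees monotonicity).
    Returns the transformed value and log absolute derivative."""
    k = np.clip(np.searchsorted(xk, x) - 1, 0, len(xk) - 2)   # bin index
    w = xk[k + 1] - xk[k]                                     # bin width
    s = (yk[k + 1] - yk[k]) / w                               # bin slope
    xi = (x - xk[k]) / w                                      # position in bin
    den = s + (dk[k + 1] + dk[k] - 2 * s) * xi * (1 - xi)
    y = yk[k] + (yk[k + 1] - yk[k]) * (s * xi**2 + dk[k] * xi * (1 - xi)) / den
    # derivative of the rational quadratic, needed for the log-det term
    dydx = s**2 * (dk[k + 1] * xi**2 + 2 * s * xi * (1 - xi)
                   + dk[k] * (1 - xi)**2) / den**2
    return y, np.log(dydx)

# sanity check: identity knots with unit derivatives give the identity map
xk = yk = np.linspace(-1.0, 1.0, 6)
y, logdet = rq_spline(np.array([0.3]), xk, yk, np.ones(6))
assert np.allclose(y, 0.3) and np.allclose(logdet, 0.0)
```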
Normalizing flows have grown more popular over the last few years; however, they remain computationally expensive, making them difficult to adopt in the broader machine learning community. In this paper, we introduce a simple one-dimensional one-layer network with a closed-form Lipschitz constant; using this, we introduce a new Exact-Lipschitz Flow (ELF) that combines the ease of sampling of residual flows with the strong performance of autoregressive flows. Furthermore, we show that ELF is a provably universal density approximator, is more computationally and parameter efficient than a multitude of other flows, and achieves state-of-the-art performance on multiple large-scale datasets.
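For context, the residual-flow sampling machinery that Lipschitz control enables: a map $x = y + g(y)$ with $\mathrm{Lip}(g) < 1$ is inverted by Banach fixed-point iteration. A minimal sketch, where `g` is any contractive map:

```python
import numpy as np

def invert_residual(x, g, n_iters=100, tol=1e-9):
    """Invert y |-> y + g(y) via the fixed-point iteration y <- x - g(y).
    Converges geometrically whenever Lip(g) < 1 (Banach fixed point)."""
    y = x.copy()
    for _ in range(n_iters):
        y_next = x - g(y)
        if np.max(np.abs(y_next - y)) < tol:
            return y_next
        y = y_next
    return y

# toy contractive residual block with Lipschitz constant 0.5
g = lambda y: 0.5 * np.tanh(y)
x = np.array([1.5, -0.3])
y = invert_residual(x, g)
assert np.allclose(y + g(y), x)
```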
Normalizing flows, diffusion normalizing flows, and variational autoencoders are powerful generative models. In this paper, we provide a unified framework for handling these approaches via Markov chains. Indeed, we consider stochastic normalizing flows as a pair of Markov chains fulfilling certain properties, and show that many state-of-the-art models for data generation fit into this framework. The Markov chain point of view enables us to couple deterministic layers, as invertible neural networks, with stochastic layers, as Metropolis-Hastings layers, Langevin layers, and variational autoencoders, in a mathematically sound way. Besides layers that have densities, such as Langevin layers, diffusion layers, or variational autoencoders, layers that have no densities, such as deterministic layers or Metropolis-Hastings layers, can also be handled. Our framework thus establishes a useful mathematical tool for combining the various approaches.
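As an illustration of one such stochastic layer, a minimal unadjusted Langevin step, assuming access to a function `score(x)` returning $\nabla \log p(x)$:

```python
import numpy as np

def langevin_layer(x, score, step=1e-2, rng=None):
    """One unadjusted Langevin step:
    x' = x + (step/2) * grad log p(x) + sqrt(step) * noise.
    Interleaving such stochastic layers with deterministic invertible
    layers is the 'stochastic normalizing flow' construction."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal(x.shape)
    return x + 0.5 * step * score(x) + np.sqrt(step) * noise

# example: target a standard normal, whose score is -x
x = np.zeros(5)
for _ in range(1000):
    x = langevin_layer(x, lambda x: -x, step=0.1)
```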
We propose the tensorizing flow method for estimating high-dimensional probability density functions from the observed data. The method is based on tensor-train and flow-based generative modeling. Our method first efficiently constructs an approximate density in the tensor-train form via solving the tensor cores from a linear system based on the kernel density estimators of low-dimensional marginals. We then train a continuous-time flow model from this tensor-train density to the observed empirical distribution by performing a maximum likelihood estimation. The proposed method combines the optimization-less feature of the tensor-train with the flexibility of the flow-based generative models. Numerical results are included to demonstrate the performance of the proposed method.
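A minimal sketch of how a function in tensor-train form is evaluated at a point, by contracting each core with basis functions evaluated at the corresponding coordinate. The core shapes and the polynomial basis here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def tt_eval(x, cores, basis):
    """Evaluate a tensor-train function p(x) = G1[x1] G2[x2] ... Gd[xd],
    where Gk[xk] = sum_j basis_j(xk) * cores[k][:, j, :] is an
    (r_{k-1} x r_k) matrix, with boundary ranks r_0 = r_d = 1."""
    v = np.ones((1, 1))
    for xk, core in zip(x, cores):
        phi = np.array([b(xk) for b in basis])        # basis evaluations
        v = v @ np.einsum("j,ajb->ab", phi, core)     # contract one core
    return float(v[0, 0])

# toy example: 3 dimensions, TT-rank 2, polynomial basis {1, x, x^2}
rng = np.random.default_rng(0)
basis = [lambda t: 1.0, lambda t: t, lambda t: t * t]
ranks = [1, 2, 2, 1]
cores = [rng.standard_normal((ranks[k], 3, ranks[k + 1])) for k in range(3)]
print(tt_eval([0.1, -0.2, 0.3], cores, basis))
```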
Normalizing flows are a popular approach for constructing probabilistic and generative models. However, maximum likelihood training of flows is challenging due to the need to compute computationally expensive determinants of Jacobians. This paper takes steps toward addressing this challenge by introducing an approach for training flows inspired by two-sample tests. Central to our framework is the energy objective, a multidimensional extension of proper scoring rules that is based on random projections, admits efficient estimators, and outperforms a range of alternative two-sample objectives that can be derived within our framework. Crucially, the energy objective and its alternatives do not require computing determinants and therefore support general flow architectures that are not well-suited to maximum likelihood training (e.g., densely connected networks). We empirically demonstrate that energy flows achieve competitive generative modeling performance while maintaining fast generation and posterior inference.
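The paper's estimator uses random projections (a sliced variant); the unsliced population quantity is the energy distance, which needs only samples, never densities or Jacobians. A minimal sketch using a biased V-statistic estimate:

```python
import numpy as np

def energy_distance(x, y):
    """Sample estimate of 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||
    for x ~ P, y ~ Q; zero iff P = Q. Determinant-free: it compares
    samples directly, so any generator architecture can be trained."""
    def mean_pdist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * mean_pdist(x, y) - mean_pdist(x, x) - mean_pdist(y, y)

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 2))           # model samples
y = rng.standard_normal((256, 2)) + 1.0     # data samples
print(energy_distance(x, y))                # > 0: the distributions differ
```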
Many applications of machine learning involve predicting flexible probability distributions over model outputs. We propose autoregressive quantile flows, a flexible class of probabilistic models over high-dimensional variables that can be used to accurately capture predictive aleatoric uncertainty. These models are instances of autoregressive flows trained using a novel objective based on proper scoring rules, which avoids the computationally expensive determinants of Jacobians during training and supports novel neural architectures. We demonstrate that these models can be used to parameterize predictive conditional distributions, improving the quality of probabilistic predictions in time series forecasting and object detection.
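The canonical proper scoring rule for quantiles is the pinball (quantile) loss; a minimal sketch of this kind of determinant-free objective (the paper's exact objective may differ):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Quantile (pinball) loss: minimized in expectation when q equals the
    tau-quantile of y's distribution. A proper scoring rule that requires
    no density evaluation, hence no Jacobian determinant."""
    diff = y - q
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.random.default_rng(0).standard_normal(10_000)
print(pinball_loss(y, q=0.0, tau=0.5))   # q = 0 is the optimal median here
```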
The rate-distortion (R-D) function, a key quantity in information theory, characterizes the fundamental limit on how much a data source can be compressed, subject to a fidelity criterion, by any compression algorithm. As researchers push for ever-improving compression performance, establishing the R-D function of a given data source is not only of scientific interest, but also reveals the possible room for improving compression algorithms. Previous work on this problem relied on distributional assumptions about the data source (Gibson, 2017) or applied only to discrete data. By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source, requiring only i.i.d. data samples. We estimate R-D sandwich bounds for Gaussian and high-dimensional banana-shaped sources, as well as GAN-generated images. Our R-D upper bound on natural images indicates room for improving the performance of state-of-the-art image compression methods, in terms of PSNR, at various bitrates.
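For reference, the quantity being sandwiched is the classical rate-distortion function of a source $X \sim P_X$ under a distortion measure $d$:

$$R(D) = \inf_{P_{\hat{X}|X}\,:\;\mathbb{E}[d(X,\hat{X})] \le D} I(X; \hat{X}),$$

i.e., the fewest bits per sample achievable by any codec whose expected distortion does not exceed $D$.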
Transportation of measure provides a versatile approach for modeling complex probability distributions, with applications in density estimation, Bayesian inference, generative modeling, and beyond. Monotone triangular transport maps, approximations of the Knothe-Rosenblatt (KR) rearrangement, are a canonical choice for these tasks. Yet the representation and parameterization of such maps have a significant impact on their generality and expressiveness, and on the properties of the optimization problems that arise in learning maps from data (e.g., via maximum likelihood estimation). We present a general framework for representing monotone triangular maps via invertible transformations of smooth functions. We establish conditions on the transformation such that the associated infinite-dimensional minimization problem has no spurious local minima, i.e., all local minima are global minima; and we show, for target distributions satisfying certain tail conditions, that the unique global minimizer corresponds to the KR map. Given samples from the target, we then propose an adaptive algorithm that estimates a sparse semi-parametric approximation of the underlying KR map. We demonstrate how this framework can be applied to joint and conditional density estimation, likelihood-free inference, and structure learning of directed graphical models, with stable generalization performance across a range of sample sizes.
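One common way to make a smooth scalar function monotone, in the spirit of the invertible transformations of smooth functions described above, is to pass its derivative through a positive rectifier and integrate. A minimal 1D sketch, where the softplus rectifier and trapezoid quadrature are illustrative assumptions:

```python
import numpy as np

def softplus(t):
    return np.logaddexp(0.0, t)   # positive rectifier g(t) = log(1 + e^t)

def monotone_map(x, f, df, n_quad=200):
    """S(x) = f(0) + int_0^x g(f'(t)) dt with g > 0, so S'(x) = g(f'(x)) > 0:
    any smooth f is turned into a strictly increasing map. Trapezoid
    quadrature stands in for a more careful discretization."""
    t = np.linspace(0.0, x, n_quad)
    return f(0.0) + np.trapz(softplus(df(t)), t)

# toy example: f(t) = sin(t) is not monotone, but S is
f, df = np.sin, np.cos
xs = np.linspace(-2, 2, 9)
S = np.array([monotone_map(x, f, df) for x in xs])
assert np.all(np.diff(S) > 0)    # strictly increasing
```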
Triangular flows, also known as Knöthe-Rosenblatt measure couplings, comprise an important building block of normalizing flow models for generative modeling and density estimation, including popular autoregressive flow models such as real-valued non-volume preserving transformations (Real NVP). We present statistical guarantees and sample complexity bounds for triangular flow statistical models. In particular, we establish the statistical consistency and finite-sample convergence rates of the Kullback-Leibler estimator of the Knöthe-Rosenblatt measure coupling, using tools from empirical process theory. Our results highlight the anisotropic geometry of the function classes at play in triangular flows, shed light on optimal coordinate ordering, and lead to statistical guarantees for Jacobian flows. We conduct numerical experiments on synthetic data to illustrate the practical implications of our theoretical findings.
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
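A minimal sketch of one IAF transformation step: the shift and scale for each dimension depend autoregressively on the *previous* dimensions of the input, so the whole transform and its log-determinant are computed in parallel. A toy autoregressive conditioner is assumed here in place of the paper's masked network:

```python
import numpy as np

def iaf_forward(z, mu_fn, sigma_fn):
    """One inverse autoregressive flow step: x = mu(z) + sigma(z) * z,
    where mu_i and sigma_i depend only on z_{1..i-1}. All dimensions are
    computed in parallel, so sampling scales to high-dimensional latents;
    log|det| = sum_i log sigma_i."""
    mu, sigma = mu_fn(z), sigma_fn(z)
    return mu + sigma * z, np.sum(np.log(sigma))

# toy autoregressive conditioner: parameters for dim i use only z_{i-1}
def mu_fn(z):
    return np.concatenate([[0.0], 0.5 * z[:-1]])            # mu_1 is constant
def sigma_fn(z):
    return np.exp(np.concatenate([[0.0], 0.1 * z[:-1]]))    # sigma_i > 0

z = np.random.default_rng(0).standard_normal(4)
x, logdet = iaf_forward(z, mu_fn, sigma_fn)
```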
The modeling of probability distributions, specifically generative modeling and density estimation, has become an immensely popular subject in recent years by virtue of its outstanding performance on sophisticated data such as images and texts. Nevertheless, a theoretical understanding of its success is still incomplete. One mystery is the paradox between memorization and generalization: In theory, the model is trained to be exactly the same as the empirical distribution of the finite samples, whereas in practice, the trained model can generate new samples or estimate the likelihood of unseen samples. Likewise, the overwhelming diversity of distribution learning models calls for a unified perspective on this subject. This paper provides a mathematical framework such that all the well-known models can be derived based on simple principles. To demonstrate its efficacy, we present a survey of our results on the approximation error, training error and generalization error of these models, which can all be established based on this framework. In particular, the aforementioned paradox is resolved by proving that these models enjoy implicit regularization during training, so that the generalization error at early-stopping avoids the curse of dimensionality. Furthermore, we provide some new results on landscape analysis and the mode collapse phenomenon.
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
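Complementing the IAF sketch above, a Masked Autoregressive Flow evaluates densities in one parallel pass (while sampling is sequential). A minimal sketch under the same toy-conditioner convention as before; the real model uses a MADE-style masked network:

```python
import numpy as np

def maf_log_prob(x, mu_fn, sigma_fn):
    """MAF density evaluation: u_i = (x_i - mu_i(x_{<i})) / sigma_i(x_{<i}),
    computed for all i in one pass; with a standard normal base,
    log p(x) = log N(u; 0, I) - sum_i log sigma_i."""
    mu, sigma = mu_fn(x), sigma_fn(x)
    u = (x - mu) / sigma
    log_base = -0.5 * np.sum(u**2) - 0.5 * len(x) * np.log(2 * np.pi)
    return log_base - np.sum(np.log(sigma))
```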
Conditional density estimation (CDE) is the task of estimating the probability of an event conditioned on some inputs. A neural network (NN) can also be used to compute the output distribution over a continuous domain, which can be viewed as an extension of the regression task. However, it is difficult to explicitly approximate a distribution without knowing information about its general form in advance. In order to fit arbitrary conditional distributions, discretizing the continuous domain into bins is an effective strategy, as long as we have sufficiently narrow bins and very large amounts of data. However, collecting enough data is often difficult, and that ideal is out of reach in many circumstances, especially in multivariate CDE, due to the curse of dimensionality. In this paper, we demonstrate the benefits of modeling free-form conditional distributions using a deconvolution-based neural network framework, coping with the data-deficiency problem of discretization. It has the advantage of being flexible, while also exploiting the hierarchical smoothness offered by the deconvolution layers. We compare our method to a number of other density estimation approaches and show that our Deconvolution Density Network (DDN) outperforms competing methods on many univariate and multivariate tasks. The code of DDN is available at https://github.com/nbiclab/ddn.
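A minimal sketch of the general idea, not the actual DDN architecture: map a condition vector to bin logits by upsampling with transposed convolutions, so that nearby bins share parameters and vary smoothly, then normalize into a piecewise-constant density:

```python
import torch
import torch.nn as nn

class BinDensityNet(nn.Module):
    """Condition vector -> histogram density over n_bins bins, produced by
    deconvolution (transposed-convolution) upsampling for smoothness."""
    def __init__(self, cond_dim, n_bins=64, channels=32):
        super().__init__()
        self.channels, self.n0 = channels, n_bins // 4
        self.proj = nn.Linear(cond_dim, channels * self.n0)
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, 4, stride=2, padding=1),
        )  # each ConvTranspose1d doubles the length: n_bins//4 -> n_bins

    def forward(self, cond, bin_width):
        h = self.proj(cond).view(-1, self.channels, self.n0)
        logits = self.deconv(h).squeeze(1)        # (batch, n_bins)
        probs = torch.softmax(logits, dim=-1)     # bin masses sum to 1
        return probs / bin_width                  # piecewise-constant density

net = BinDensityNet(cond_dim=5)
density = net(torch.randn(8, 5), bin_width=0.1)   # shape (8, 64)
```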
For distributions $\mathbb{P}$ and $\mathbb{Q}$ with different supports or undefined densities, the divergence $\textrm{D}(\mathbb{P}||\mathbb{Q})$ may not exist. We define a Spread Divergence $\tilde{\textrm{D}}(\mathbb{P}||\mathbb{Q})$ on modified $\mathbb{P}$ and $\mathbb{Q}$ and describe sufficient conditions for the existence of such a divergence. We demonstrate how to maximize the discriminatory power of a given divergence by parameterizing and learning the spread. We also give examples of using a Spread Divergence to train implicit generative models, including linear models (Independent Components Analysis) and non-linear models (Deep Generative Networks).
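Concretely, given a noise kernel $K(y|x)$ (e.g., Gaussian), the spread divergence compares the smoothed distributions

$$\tilde{p}(y) = \int K(y|x)\,p(x)\,dx, \qquad \tilde{q}(y) = \int K(y|x)\,q(x)\,dx, \qquad \tilde{\textrm{D}}(\mathbb{P}||\mathbb{Q}) = \textrm{D}(\tilde{\mathbb{P}}||\tilde{\mathbb{Q}}),$$

which can exist even when $\mathbb{P}$ and $\mathbb{Q}$ have disjoint supports, and, under the sufficient conditions referenced above, vanishes iff $\mathbb{P} = \mathbb{Q}$.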
Score-based generative models have excellent performance in terms of generation quality and likelihood. They model the data distribution by matching a parameterized score network with first-order data score functions. The score network can be used to define an ODE (a "score-based diffusion ODE") for exact likelihood evaluation. However, the relationship between the ODE's likelihood and the score matching objective is unclear. In this work, we prove that matching the first-order score is not sufficient to maximize the likelihood of the ODE, by showing a gap between the maximum likelihood and score matching objectives. To fill this gap, we show that the negative likelihood of the ODE can be bounded by controlling the first-, second-, and third-order score matching errors; we further propose a novel high-order denoising score matching method to enable maximum likelihood training of score-based diffusion ODEs. Our algorithm guarantees that the high-order matching error is bounded by the training error and the lower-order matching errors. We empirically observe that with high-order matching, score-based diffusion ODEs achieve better likelihood on both synthetic data and CIFAR-10, while retaining high generation quality.
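For context, a minimal sketch of the standard first-order denoising score matching loss that the paper generalizes to higher orders, here at a single noise level $\sigma$:

```python
import torch

def dsm_loss(score_net, x, sigma):
    """First-order denoising score matching at noise level sigma:
    perturb x with eps ~ N(0, I) and regress the score of the perturbed
    point onto -eps / sigma, i.e. minimize E||sigma * s(x + sigma*eps) + eps||^2.
    The paper shows matching only this first-order score does not maximize
    the diffusion ODE's likelihood, motivating second/third-order terms."""
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    return ((sigma * score_net(x_noisy) + eps) ** 2).sum(dim=-1).mean()

score_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
loss = dsm_loss(score_net, torch.randn(128, 2), sigma=0.5)
loss.backward()
```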
Learning the tail behavior of a distribution is a notoriously difficult problem. By definition, the number of samples from the tail is small, and deep generative models, such as normalizing flows, tend to concentrate on learning the body of the distribution. In this paper, we focus on improving the ability of normalizing flows to correctly capture tail behavior and thus form more accurate models. We prove that the marginal tailedness of an autoregressive flow can be controlled via the tailedness of the marginals of its base distribution. This theoretical insight leads us to a novel type of flow based on flexible base distributions and data-driven linear layers. An empirical analysis shows that the proposed method improves accuracy, especially on the tails of the distribution, and is able to generate heavy-tailed data. We demonstrate its application on a weather and climate example, in which capturing tail behavior is essential.
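A minimal illustration of the insight, assuming a per-dimension Student-t base distribution and a toy affine (hence Lipschitz) stand-in for the flow: each output marginal inherits the tail heaviness of the corresponding base marginal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# heavy-tailed base: per-dimension Student-t with its own tail index (df)
df = np.array([2.0, 30.0])        # dim 0 heavy-tailed, dim 1 near-Gaussian
z = stats.t.rvs(df, size=(100_000, 2), random_state=rng)

# toy flow: an affine transform; the marginal tails of x are controlled
# by the tails of the corresponding base marginals
x = z * np.array([0.5, 2.0]) + np.array([1.0, -1.0])

# the heavier base tail yields far larger extreme quantiles after the flow
print(np.quantile(np.abs(x[:, 0]), 0.999),
      np.quantile(np.abs(x[:, 1]), 0.999))
```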
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in a high-dimensional ambient space. In this paper, we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, thereby addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.
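A minimal sketch of one instance of such a two-step procedure, with PCA standing in for the dimensionality-reduction step and a Gaussian mixture for the maximum-likelihood density estimator; both are illustrative stand-ins for the paper's learned models:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# data near a 2-D manifold embedded in a 10-D ambient space
latent = rng.standard_normal((2000, 2))
A = rng.standard_normal((2, 10))
data = latent @ A + 0.01 * rng.standard_normal((2000, 10))

# step 1: dimensionality reduction onto the (approximate) manifold
encoder = PCA(n_components=2).fit(data)
z = encoder.transform(data)

# step 2: maximum-likelihood density estimation in the low-dim space
density = GaussianMixture(n_components=5, random_state=0).fit(z)

# sampling: draw in the reduced space, then decode back to ambient space
z_new, _ = density.sample(5)
x_new = encoder.inverse_transform(z_new)
```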