We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.
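As an illustrative, hedged sketch of how a mutual information gap could be computed once mutual informations have been estimated (the function and argument names are ours, not from the paper's code; it assumes a precomputed matrix of estimated mutual informations between each latent dimension and each ground-truth factor):

```python
import numpy as np

def mutual_information_gap(mi, factor_entropies):
    """For each ground-truth factor, take the gap between the two latent
    dimensions with the highest mutual information, normalize by that
    factor's entropy, and average over factors."""
    mi = np.asarray(mi, dtype=float)            # shape (num_latents, num_factors)
    h = np.asarray(factor_entropies, dtype=float)  # shape (num_factors,)
    sorted_mi = np.sort(mi, axis=0)[::-1]       # descending along the latent axis
    gaps = (sorted_mi[0] - sorted_mi[1]) / h    # per-factor normalized gap
    return float(gaps.mean())
```

A perfectly disentangled model, where each factor is captured by exactly one latent, scores 1; a representation in which two latents share a factor equally scores 0 for that factor.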
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon β-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.
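Encouraging a factorial distribution of representations requires samples from the product of the marginals $\prod_j q(z_j)$; in FactorVAE these are typically obtained by permuting each latent dimension independently across the batch, and a discriminator contrasting them with samples from $q(z)$ yields a density-ratio estimate of the total correlation. A minimal NumPy sketch of the permutation step (our naming, not the authors' code):

```python
import numpy as np

def permute_dims(z, seed=None):
    """Shuffle each latent dimension independently across the batch,
    producing an approximate sample from the product of marginals."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z)
    out = np.empty_like(z)
    for j in range(z.shape[1]):                       # one permutation per dim
        out[:, j] = z[rng.permutation(z.shape[0]), j]
    return out
```

Each column of the output is a reshuffled copy of the corresponding column of the input, so the per-dimension marginals are preserved while cross-dimension dependencies are broken.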
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12 000 models covering most prominent methods and evaluation metrics in a reproducible large-scale experimental study on seven different data sets. We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
We present a self-supervised method for disentangling factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions on the number or the distribution of the individual latent variables to be extracted). In our method, which we call NashAE, high-dimensional feature disentanglement is accomplished in the low-dimensional latent space of a standard autoencoder (AE) by promoting the discrepancy between each encoding element and information about that element recovered from all other encoding elements. Disentanglement is promoted efficiently by framing this as a minmax game between the AE and an ensemble of regression networks, each of which estimates one element conditioned on observations of all the others. We quantitatively compare our approach with leading methods using existing disentanglement metrics. Furthermore, we show that NashAE has increased reliability and an increased capacity to capture salient data characteristics in the learned latent representation.
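As a hedged illustration of the quantity this minmax game suppresses, one can measure how well each latent dimension is predicted from the others; here simple linear regressors stand in for the regression networks described above (the names and the linear simplification are ours):

```python
import numpy as np

def predictability(z):
    """R^2 of an affine least-squares prediction of each latent dimension
    z[:, j] from all the other dimensions. An adversarial game like
    NashAE's drives these predictabilities down."""
    z = np.asarray(z, dtype=float)
    n, d = z.shape
    r2 = np.empty(d)
    for j in range(d):
        others = np.delete(z, j, axis=1)
        X = np.column_stack([others, np.ones(n)])        # affine fit
        coef, *_ = np.linalg.lstsq(X, z[:, j], rcond=None)
        resid = z[:, j] - X @ coef
        tot = ((z[:, j] - z[:, j].mean()) ** 2).sum()
        r2[j] = 1.0 - resid @ resid / tot
    return r2
```

Independent latent dimensions give near-zero scores, while any linear dependence among dimensions pushes the affected scores toward 1.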
β-VAE is a follow-up technique to the variational autoencoder that proposes a special weighting of the KL divergence term in the VAE loss to obtain disentangled representations. Even on toy datasets, disentangled learning is brittle, and hyperparameters for which it works and is meaningful are difficult to find. Here, we investigate the original β-VAE paper and add evidence to previously obtained results suggesting its lack of reproducibility. We also further extend the experiments on the model and include further, more complex datasets in the analysis. We additionally implement the FID score metric for the β-VAE model and derive a qualitative analysis of the obtained results. We conclude with a brief discussion of possible future investigations that could add more robustness to the claims.
This paper proposes a disentangled generative causal representation (DEAR) learning method under appropriate supervised information. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally related. We show that previous methods with an independent prior fail to disentangle causally related factors even under supervision. Motivated by this finding, we propose a new disentangled learning method, called DEAR, that enables causally controllable generation and causal representation learning. The key ingredient of this new formulation is to use a structural causal model (SCM) as the prior distribution of a bidirectional generative model. The prior is then trained jointly with the generator and encoder using a suitable GAN algorithm, incorporating supervised information on the ground-truth factors and their underlying causal structure. We provide theoretical justification for the identifiability and asymptotic convergence of the proposed method. We conduct extensive experiments on both synthetic and real data sets to demonstrate the effectiveness of DEAR in causally controllable generation, and the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.
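One of the bounds covered by this line of work is the InfoNCE-style contrastive lower bound, which saturates at log(batch size) and is therefore biased but low-variance when the true MI is large. A minimal NumPy sketch under our own naming (an illustration, not the paper's implementation):

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on MI from a critic score matrix, where
    scores[i, j] = f(x_i, y_j) and the diagonal holds the paired samples.
    The estimate can never exceed log(K) for a batch of size K."""
    s = np.asarray(scores, dtype=float)
    k = s.shape[0]
    m = s.max(axis=1, keepdims=True)                       # for stability
    row_logsumexp = np.log(np.exp(s - m).sum(axis=1)) + m[:, 0]
    return float(np.mean(np.diag(s) - row_logsumexp) + np.log(k))
```

With an uninformative critic (all scores equal) the bound is exactly 0; with a sharply diagonal score matrix it approaches, but never exceeds, log K.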
The ability to extract generative parameters from high-dimensional fields of data in an unsupervised manner is a highly desirable yet unrealized goal in computational physics. This work explores variational autoencoders (VAEs) for nonlinear dimension reduction with the specific aim of disentangling the independent physical parameters responsible for generating the data. A disentangled decomposition is interpretable and can be transferred to a variety of tasks including generative modeling, design optimization, and probabilistic reduced-order modeling. A major emphasis of this work is to characterize disentanglement using VAEs while minimally modifying the classic VAE loss function (i.e., the evidence lower bound) to maintain high reconstruction accuracy. The loss landscape is characterized by over-regularized local minima which surround desirable solutions. We illustrate comparisons between disentangled and entangled representations by juxtaposing learned latent distributions against the true generative factors in a model porous-flow problem. Hierarchical priors are shown to facilitate the learning of disentangled representations. The regularization loss is unaffected by latent rotation when training with rotationally invariant priors, and thus learning non-rotationally-invariant priors aids in capturing the properties of the generative factors, improving disentanglement. Finally, it is shown that semi-supervised learning, accomplished by labeling a small number of samples ($O(1\%)$), results in accurate disentangled latent representations that can be consistently learned.
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction aimed at aligning a model's representations with the underlying factors generating the data (e.g. color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, with in parts over +60% in relative improvement over existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization.
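The criterion above is built on a Hausdorff distance between supports rather than a divergence between distributions. For intuition, a minimal NumPy sketch of the symmetric Hausdorff distance between two finite point sets (an illustration of the underlying distance, not the paper's estimator):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two finite point sets
    (rows are points): the largest distance from any point in one set
    to its nearest neighbor in the other set."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Because only the supports must match, two sets sampled from very differently shaped distributions over the same region score near zero, which is what permits correlated factors.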
Learning concise data representations without supervisory signals is a fundamental challenge in machine learning. A prominent approach to this goal is likelihood-based models such as variational autoencoders (VAEs), which learn latent representations based on a meta-prior, a general premise assumed beneficial for downstream tasks (e.g. disentanglement). However, such approaches generally deviate from the original likelihood architecture to apply the introduced meta-prior, causing undesirable changes in their training. In this paper, we propose a novel representation learning method, the Gromov-Wasserstein Autoencoder (GWAE), which directly matches the latent and data distributions. Instead of a likelihood-based objective, GWAE models are trained by minimizing the Gromov-Wasserstein (GW) metric. The GW metric measures the structure-oriented discrepancy between distributions supported on incomparable spaces, e.g. spaces with different dimensionalities. By restricting the family of the trainable prior, we can introduce meta-priors to control latent representations for downstream tasks. Empirical comparisons with existing VAE-based methods demonstrate that GWAE models can learn representations based on meta-priors by changing the prior family, without further modification of the GW objective.
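For reference, one standard form of the Gromov-Wasserstein discrepancy between metric-measure spaces $(\mathcal{X}, d_{\mathcal{X}}, \mu)$ and $(\mathcal{Z}, d_{\mathcal{Z}}, \nu)$ is (a textbook definition; the paper's exact objective may differ in details such as the exponent or a relaxation used for training):

```latex
\mathrm{GW}(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)} \;
\mathbb{E}_{(x, z) \sim \pi}\, \mathbb{E}_{(x', z') \sim \pi}
\Big[\, \big|\, d_{\mathcal{X}}(x, x') - d_{\mathcal{Z}}(z, z') \,\big|^{2} \Big]
```

Because the metric compares within-space distances $d_{\mathcal{X}}(x, x')$ and $d_{\mathcal{Z}}(z, z')$ rather than cross-space distances, it is well defined even when data space and latent space have different dimensionalities.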
Correlations between factors of variation are prevalent in real-world data. Machine learning algorithms may benefit from exploiting such correlations, as they can increase predictive performance on noisy data. However, such correlations are often not robust (e.g., they may change between domains, datasets, or applications), and we then wish to avoid exploiting them. Disentanglement methods aim to learn representations that capture different factors of variation in separate latent subspaces. A common approach involves minimizing the mutual information between latent subspaces, such that each encodes a single underlying attribute. However, this fails when the attributes are correlated. We solve this problem by enforcing independence between subspaces conditioned on the available attributes, which allows us to remove only those dependencies that are not due to the correlation structure present in the training data. We achieve this via an adversarial approach that minimizes the conditional mutual information (CMI) between subspaces with respect to categorical variables. We first show theoretically that CMI minimization is a good objective for robust disentanglement on linear problems with Gaussian data. We then apply our method to real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings.
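In the Gaussian case analyzed theoretically, conditional mutual information has a closed form in terms of log-determinants of covariance submatrices. A small NumPy sketch of that quantity (our naming; this computes the objective for jointly Gaussian variables, not the adversarial estimator itself):

```python
import numpy as np

def gaussian_cmi(cov, x, y, z):
    """I(X; Y | Z) in nats for jointly Gaussian variables, via
    I = 1/2 * (log|S_xz| + log|S_yz| - log|S_z| - log|S_xyz|),
    where cov is the joint covariance matrix and x, y, z are index lists."""
    cov = np.asarray(cov, dtype=float)
    def logdet(idx):
        idx = list(idx)
        if not idx:          # empty conditioning set contributes nothing
            return 0.0
        return float(np.linalg.slogdet(cov[np.ix_(idx, idx)])[1])
    return 0.5 * (logdet(x + z) + logdet(y + z) - logdet(z) - logdet(x + y + z))
```

With `z = []` this reduces to the ordinary Gaussian mutual information, e.g. $-\tfrac{1}{2}\log(1-\rho^2)$ for two unit-variance variables with correlation $\rho$.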
Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without meaningfully using the latent representation. We introduce inference critics that detect and incentivize against posterior collapse by requiring correspondence between latent variables and observations. By connecting the critic's objective to the literature on self-supervised contrastive representation learning, we show both theoretically and empirically that optimizing inference critics increases the mutual information between observations and latents, mitigating posterior collapse. The approach is straightforward to implement and requires significantly less training time than previous methods, yet obtains competitive results on three established datasets. Overall, the approach lays the foundation for bridging the previously disconnected frameworks of contrastive learning and probabilistic modeling with variational autoencoders, underscoring the benefits both communities may find at their intersection.
Negative-free contrastive learning has attracted a lot of attention for its simplicity and impressive performance in large-scale pretraining, but its disentanglement property remains unexplored. In this paper, we examine negative-free contrastive learning methods to study the disentanglement property of such self-supervised methods empirically. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on the mutual information between representation factors and data factors. With the proposed metric, we benchmark the disentanglement property of negative-free contrastive learning for the first time, on both popular synthetic datasets and the real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of the representation. We extend the study of disentangled representation learning to high-dimensional representation spaces and negative-free contrastive learning for the first time. The implementation of the proposed metric is available at https://github.com/noahcao/disentangeslement_lib_med.
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents that reason in complex environments. In this paper, we propose CITRIS, a variational autoencoder framework that learns causal representations from temporal sequences of images in which the underlying causal factors may have been intervened upon. In contrast to recent literature, CITRIS exploits temporality and observed intervention targets to identify both scalar and multidimensional causal factors, such as 3D rotation angles. Furthermore, by introducing a normalizing flow, CITRIS can easily be extended to leverage and disentangle representations obtained by already pretrained autoencoders. Extending previous results on scalar causal factors, we prove identifiability in a more general setting in which only some components of a causal factor are affected by interventions. In experiments on sequences of 3D-rendered images, CITRIS outperforms previous methods at recovering the underlying causal variables. Moreover, using pretrained autoencoders, CITRIS can even generalize to unseen instantiations of causal factors, opening future research areas in sim-to-real generalization for causal representation learning.
Variational autoencoders (VAEs) have recently been used for unsupervised disentanglement learning of complex density distributions. Numerous variants exist to encourage disentanglement in the latent space while improving reconstruction. However, none has simultaneously managed the trade-off between attaining extremely low reconstruction error and a high disentanglement score. We present a generalized framework that handles this challenge under constrained optimization, and demonstrate that it outperforms state-of-the-art existing models on disentanglement while balancing reconstruction. We introduce three controllable Lagrangian hyperparameters to control the reconstruction loss, the KL divergence loss, and a correlation measure. We prove that maximizing information in the reconstruction network is equivalent to information maximization during amortized inference, under reasonable assumptions and constraint relaxation.
The combination of machine learning models with physical models is a recent research path to learn robust data representations. In this paper, we introduce p$^3$VAE, a generative model that integrates a perfect physical model which partially explains the true underlying factors of variation in the data. To fully leverage our hybrid design, we propose a semi-supervised optimization procedure and an inference scheme that comes along with meaningful uncertainty estimates. We apply p$^3$VAE to the semantic segmentation of high-resolution hyperspectral remote sensing images. Our experiments on a simulated data set demonstrated the benefits of our hybrid model against conventional machine learning models in terms of extrapolation capabilities and interpretability. In particular, we show that p$^3$VAE naturally has high disentanglement capabilities. Our code and data have been made publicly available at https://github.com/Romain3Ch216/p3VAE.
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods.
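The lower bound in question, as derived in the InfoGAN paper, introduces an auxiliary distribution $Q(c \mid x)$ approximating the intractable posterior $P(c \mid x)$:

```latex
L_I(G, Q) \;=\; \mathbb{E}_{c \sim P(c),\; x \sim G(z, c)}\big[\log Q(c \mid x)\big] \;+\; H(c)
\;\le\; I\big(c;\, G(z, c)\big)
```

InfoGAN then solves the minimax game $\min_{G, Q} \max_D \; V(D, G) - \lambda L_I(G, Q)$, so maximizing $L_I$ tightens a lower bound on the mutual information between the latent codes $c$ and the generated observations without ever evaluating the true posterior.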
Data cleaning often comprises outlier detection and data repair. Systematic errors result from nearly deterministic transformations that occur repeatedly in the data, e.g. specific image pixels being set to default values, or watermarks. Consequently, models with sufficient capacity easily overfit to these errors, making detection and repair difficult. Seeing as a systematic outlier is a combination of patterns of a clean instance and systematic error patterns, our main insight is that inliers can be modelled by a smaller representation (subspace) in a model than outliers. By exploiting this, we propose the Clean Subspace Variational Autoencoder (CLSVAE), a novel semi-supervised model for the detection and automated repair of systematic errors. The main idea is to partition the latent space and model inlier and outlier patterns separately. Compared to previous related models, CLSVAE is effective with much less labelled data, often with less than 2% of the data. We provide experiments using three image datasets, in scenarios with different levels of corruption and labelled-set sizes, comparing against relevant baselines. CLSVAE provides superior repairs without human intervention, e.g. with only 0.25% of labelled data we see a relative error decrease of 58% compared to the closest baseline.
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs, namely capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the CCVAE, a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
Approximating complex probability densities is a core problem in modern statistics. In this paper, we introduce the concept of variational inference (VI), a popular method in machine learning that uses optimization techniques to estimate complex probability densities. This property allows VI to converge faster than classical methods such as Markov chain Monte Carlo sampling. Conceptually, VI works by choosing a family of probability density functions and then finding the member of that family closest to the actual probability density, commonly using the Kullback-Leibler (KL) divergence as the optimization metric. We introduce the evidence lower bound to tractably compute the approximate probability density, and we review the ideas behind mean-field variational inference. Finally, we discuss the applications of VI to variational autoencoders (VAEs) and VAE-Generative Adversarial Networks (VAE-GANs). With this paper, we aim to explain the concept of VI and assist future research with this approach.
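The evidence lower bound mentioned above follows from a textbook decomposition of the log marginal likelihood, with $q(z)$ the chosen variational family:

```latex
\log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\mathrm{ELBO}}
\;+\; \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big)
```

Since the KL term is nonnegative, the ELBO lower-bounds $\log p(x)$, and because $\log p(x)$ does not depend on $q$, maximizing the ELBO over the variational family is equivalent to minimizing the KL divergence between $q(z)$ and the true posterior $p(z \mid x)$.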