Correlations between factors of variation are prevalent in real-world data. Machine learning algorithms may benefit from exploiting such correlations, as they can increase predictive performance on noisy data. However, such correlations are often not robust (e.g., they may change between domains, datasets, or applications), and we would rather avoid exploiting them. Disentanglement methods aim to learn representations that capture different factors of variation in separate latent subspaces. A common approach involves minimizing the mutual information between latent subspaces, so that each captures a single underlying attribute. However, this fails when the attributes are correlated. We solve this problem by enforcing independence between subspaces conditioned on the available attributes, which allows us to remove only those dependencies that are not due to the correlation structure present in the training data. We achieve this via an approach that minimizes the conditional mutual information (CMI) between subspaces with respect to categorical variables. We first show theoretically that CMI minimization is a good objective for robust disentanglement on linear problems with Gaussian data. We then apply our method to real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings.
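To make the conditioning idea concrete in the linear Gaussian setting mentioned above, the following sketch penalizes only the within-class cross-covariance between two latent subspaces; for jointly Gaussian codes this acts as a proxy for driving the conditional mutual information to zero. It is not the paper's CMI estimator, and the names (z1, z2, y, lam) are illustrative.

```python
# Minimal sketch (not the paper's estimator): for jointly Gaussian subspaces,
# I(z1; z2 | y) = 0 iff the within-class cross-covariance vanishes, so a simple
# proxy penalty decorrelates the two latent subspaces inside each class of the
# categorical attribute y.
import torch

def conditional_decorrelation_penalty(z1: torch.Tensor,
                                       z2: torch.Tensor,
                                       y: torch.Tensor) -> torch.Tensor:
    """Sum of squared within-class cross-covariances between subspaces z1 and z2."""
    penalty = z1.new_zeros(())
    for c in y.unique():
        mask = (y == c)
        if mask.sum() < 2:
            continue  # need at least two samples to estimate a covariance
        a = z1[mask] - z1[mask].mean(dim=0, keepdim=True)
        b = z2[mask] - z2[mask].mean(dim=0, keepdim=True)
        cross_cov = a.T @ b / (mask.sum() - 1)
        penalty = penalty + (cross_cov ** 2).sum()
    return penalty

# usage: total_loss = reconstruction_loss + lam * conditional_decorrelation_penalty(z1, z2, y)
```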
This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method under appropriate supervised information. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally related. We show that previous methods with an independent prior fail to disentangle causally related factors even under supervision. Motivated by this finding, we propose a new disentangled learning method called DEAR that enables causally controllable generation and causal representation learning. The key ingredient of this new formulation is to use a structural causal model (SCM) as the prior distribution of a bidirectional generative model. The prior is then trained jointly with a generator and an encoder using a suitable GAN algorithm, incorporating supervised information about the ground-truth factors and their underlying causal structure. We provide theoretical justification for the identifiability and asymptotic convergence of the method. We conduct extensive experiments on both synthetic and real datasets to demonstrate the effectiveness of DEAR in causally controllable generation, and the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction aimed at aligning a model's representations with the underlying factors generating the data (e.g. color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, with relative improvements of over 60% in parts over existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization.
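As a rough illustration of what a factorized-support objective can look like at the batch level (a hedged sketch, not necessarily the authors' exact HFS estimator), one can compare a batch of latent codes against a surrogate sample from the product of its empirical marginals and penalize the symmetric Hausdorff distance between the two point sets:

```python
# Illustrative batch-level sketch of encouraging a factorized support: a surrogate
# for the product of marginal supports is obtained by independently permuting each
# latent dimension across the batch, and the symmetric Hausdorff distance between
# the original and permuted point sets is penalized.
import torch

def hausdorff_factorized_support_penalty(z: torch.Tensor) -> torch.Tensor:
    """z: (batch, dim) latent codes. Returns a scalar Hausdorff-distance penalty."""
    # Sample from the product of empirical marginals by shuffling each dim independently.
    z_fact = torch.stack(
        [z[torch.randperm(z.size(0)), d] for d in range(z.size(1))], dim=1
    )
    dists = torch.cdist(z, z_fact)              # pairwise distances between the two sets
    d_forward = dists.min(dim=1).values.max()   # sup over z of inf over z_fact
    d_backward = dists.min(dim=0).values.max()  # sup over z_fact of inf over z
    return torch.maximum(d_forward, d_backward)
```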
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12 000 models covering most prominent methods and evaluation metrics in a reproducible large-scale experimental study on seven different data sets. We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
Safely deploying machine learning models in the real world is often a challenging process. Models trained with data obtained from a specific geographic location tend to fail when queried with data obtained elsewhere, agents trained in a simulation can struggle to adapt when deployed in the real world or in novel environments, and neural networks that are fit to a subset of the population might carry some selection bias into their decision process. In this work, we describe the problem of data shift from a novel information-theoretic perspective by (i) identifying and describing the different sources of error, and (ii) comparing some of the most promising objectives explored in the recent domain generalization and fair classification literature. From our theoretical analysis and empirical evaluation, we conclude that the model selection procedure needs to be guided by careful considerations regarding the observed data, the factors used for correction, and the structure of the data-generating process.
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon β-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.
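The total-correlation penalty can be sketched with the density-ratio trick described above: a discriminator is trained to tell apart codes sampled jointly from codes whose dimensions have been permuted independently across the batch, and its logit then serves as an estimate of log q(z) / prod_j q(z_j). The snippet below is a simplified sketch assuming a single-logit `discriminator`; it is not a complete FactorVAE implementation.

```python
# Sketch of the density-ratio trick behind a total-correlation penalty.
import torch
import torch.nn.functional as F

def permute_dims(z: torch.Tensor) -> torch.Tensor:
    """Shuffle each latent dimension independently across the batch to approximate
    the product of marginals, prod_j q(z_j)."""
    return torch.stack(
        [z[torch.randperm(z.size(0)), j] for j in range(z.size(1))], dim=1
    )

def total_correlation_estimate(z: torch.Tensor, discriminator) -> torch.Tensor:
    """For a single-logit discriminator D, the logit equals log D(z)/(1-D(z)),
    an estimate of the log density ratio q(z) / prod_j q(z_j)."""
    return discriminator(z).mean()

def discriminator_loss(z: torch.Tensor, discriminator) -> torch.Tensor:
    """Train D to separate joint samples (label 1) from dimension-permuted samples (label 0)."""
    logits_joint = discriminator(z.detach())
    logits_marg = discriminator(permute_dims(z.detach()))
    return (F.binary_cross_entropy_with_logits(logits_joint, torch.ones_like(logits_joint))
            + F.binary_cross_entropy_with_logits(logits_marg, torch.zeros_like(logits_marg)))

# usage: vae_loss = elbo_loss + gamma * total_correlation_estimate(z, discriminator)
```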
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs, namely capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the CCVAE, a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments". Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
Many datasets are underspecified: there exist multiple equally viable solutions to a given task. Underspecification can be problematic for methods that learn a single hypothesis, because different functions that achieve low training loss can focus on different predictive features and thus produce widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled data from the test distribution. We then disambiguate by selecting one of the discovered hypotheses using minimal additional supervision, in the form of additional labels or by inspecting feature visualizations. We demonstrate the ability of DivDis to find hypotheses that use robust features in image classification and natural language processing problems.
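A hedged sketch of the diversification stage follows (the names and exact loss are illustrative, not the published DivDis objective): two heads are fit to the labeled source data while the mutual information between their predicted label distributions on unlabeled target data is penalized, pushing them toward distinct hypotheses.

```python
# Penalize the mutual information between two heads' predictions on unlabeled data
# so that the heads disagree in a statistically meaningful way.
import torch

def head_mutual_information(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """p1, p2: (batch, num_classes) softmax outputs of two heads on unlabeled data."""
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(dim=0)  # empirical joint over classes
    marg1 = joint.sum(dim=1, keepdim=True)
    marg2 = joint.sum(dim=0, keepdim=True)
    eps = 1e-8
    return (joint * (torch.log(joint + eps)
                     - torch.log(marg1 + eps)
                     - torch.log(marg2 + eps))).sum()

# usage:
#   mi = head_mutual_information(head1(x_tgt).softmax(-1), head2(x_tgt).softmax(-1))
#   loss = ce(head1(x_src), y_src) + ce(head2(x_src), y_src) + lam * mi
```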
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.
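For reference, the decomposition and the metric referred to above can be written as follows (reconstructed here in standard notation, with $n$ an index over datapoints and $v_k$ the ground-truth factors); the total-correlation term is the one β-TCVAE up-weights:

$$
\mathbb{E}_{p(n)}\big[\mathrm{KL}\big(q(z\mid n)\,\|\,p(z)\big)\big]
= \underbrace{I_q(z;n)}_{\text{index-code MI}}
+ \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)}_{\text{total correlation}}
+ \underbrace{\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}},
$$

$$
\mathrm{MIG} = \frac{1}{K}\sum_{k=1}^{K} \frac{1}{H(v_k)}\Big( I(z_{j^{(k)}}; v_k) - \max_{j \neq j^{(k)}} I(z_j; v_k) \Big),
\qquad j^{(k)} = \arg\max_j I(z_j; v_k).
$$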
We present a self-supervised method to disentangle factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions about the number or distribution of the individual latent variables to be extracted). In our method, which we call NashAE, high-dimensional feature disentanglement is accomplished in the low-dimensional latent space of a standard autoencoder (AE) by promoting the discrepancy between each encoded element and information about that element recovered from all other encoded elements. Disentanglement is promoted efficiently by framing this as a minmax game between the AE and an ensemble of regression networks, each of which estimates one element conditioned on an observation of all the others. We quantitatively compare our approach with alternative methods using existing disentanglement metrics. Furthermore, we show that NashAE has improved reliability and an increased capacity to capture salient data characteristics in the learned latent representation.
Deep neural networks are susceptible to shortcut learning, using simple features to achieve low training loss without discovering essential semantic structure. Contrary to prior belief, we show that generative models alone are not sufficient to prevent shortcut learning, despite an incentive to recover a more comprehensive representation of the data than discriminative approaches. However, we observe that shortcuts are preferentially encoded with minimal information, a fact that generative models can exploit to mitigate shortcut learning. In particular, we propose Chroma-VAE, a two-pronged approach where a VAE classifier is initially trained to isolate the shortcut in a small latent subspace, allowing a secondary classifier to be trained on the complementary, shortcut-free latent subspace. In addition to demonstrating the efficacy of Chroma-VAE on benchmark and real-world shortcut learning tasks, our work highlights the potential for manipulating the latent space of generative classifiers to isolate or interpret specific correlations.
Understanding the latent causal factors of a dynamical system from visual observations is considered a key step towards agents that reason in complex environments. In this paper, we propose CITRIS, a variational autoencoder framework that learns causal representations from temporal sequences of images in which the underlying causal factors may have been intervened upon. In contrast to recent literature, CITRIS exploits temporality and observed intervention targets to identify scalar and multidimensional causal factors, such as 3D rotation angles. Furthermore, by introducing a normalizing flow, CITRIS can easily be extended to leverage and disentangle representations obtained from already pretrained autoencoders. Extending previous results on scalar causal factors, we prove identifiability in a more general setting in which only some components of a causal factor are affected by interventions. In experiments on sequences of 3D rendered images, CITRIS outperforms previous methods at recovering the underlying causal variables. Moreover, using pretrained autoencoders, CITRIS can even generalize to unseen instantiations of causal factors, opening up future research directions in sim-to-real generalization for causal representation learning.
Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world. To tackle this problem, we assume that variation across training domains is representative of the variation we might encounter at test time, but also that shifts at test time may be more extreme in magnitude. In particular, we show that reducing differences in risk across training domains can reduce a model's sensitivity to a wide range of extreme distributional shifts, including the challenging setting where the input contains both causal and anticausal elements. We motivate this approach, Risk Extrapolation (REx), as a form of robust optimization over a perturbation set of extrapolated domains (MM-REx), and propose a penalty on the variance of training risks (V-REx) as a simpler variant. We prove that variants of REx can recover the causal mechanisms of the targets, while also providing some robustness to changes in the input distribution ("covariate shift"). By trading off robustness to causally induced distributional shifts and covariate shift, REx is able to outperform alternative methods such as Invariant Risk Minimization in situations where these types of shift co-occur.
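The V-REx variant admits a particularly compact sketch (names are illustrative): the objective is the mean of the per-domain risks plus a penalty on their variance.

```python
# Minimal sketch of a V-REx-style objective: mean risk across training domains plus
# a penalty on the variance of those risks, pushing the model toward equal risk everywhere.
import torch

def v_rex_objective(domain_risks: torch.Tensor, beta: float = 10.0) -> torch.Tensor:
    """domain_risks: 1-D tensor containing the empirical risk of each training domain."""
    return domain_risks.mean() + beta * domain_risks.var(unbiased=False)

# usage:
# risks = torch.stack([loss_fn(model(x_e), y_e) for (x_e, y_e) in domain_batches])
# loss = v_rex_objective(risks)
```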
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is the independent and identical distribution assumption, which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as domain generalization methods. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys deal with causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) invariance via causal data augmentation methods, which are applied during the data pre-processing stage, (ii) invariance via causal representation learning methods, which are utilized during the representation learning stage, and (iii) invariance via transferring causal mechanisms, which is applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
A new bimodal generative model is proposed for generating conditional and joint samples, together with a training method for learning succinct bottleneck representations. The proposed model, dubbed the variational Wyner model, is designed based on two classical problems in network information theory (distributed simulation and channel synthesis), in which Wyner's common information is the fundamental limit on the succinctness of the common representation. The model is trained by minimizing the symmetric Kullback-Leibler divergence between variational and model distributions, with regularization terms for the common information, reconstruction consistency, and latent-space matching, carried out via an adversarial density-ratio estimation technique. The utility of the proposed approach is demonstrated through experiments on joint and conditional generation with synthetic and real-world datasets, as well as on a challenging zero-shot image retrieval task.
Recent work reported the label alignment property in a supervised learning setting: the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix. Inspired by this observation, we derive a regularization method for unsupervised domain adaptation. Instead of regularizing representation learning as done by popular domain adaptation methods, we regularize the classifier so that the target domain predictions can to some extent "align" with the top singular vectors of the unsupervised data matrix from the target domain. In a linear regression setting, we theoretically justify the label alignment property and characterize the optimality of the solution of our regularization by bounding its distance to the optimal solution. We conduct experiments to show that our method can work well on the label shift problems, where classic domain adaptation methods are known to fail. We also report mild improvement over domain adaptation baselines on a set of commonly seen MNIST-USPS domain adaptation tasks and on cross-lingual sentiment analysis tasks.
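A hedged sketch of what such a regularizer can look like for a linear model (the paper's exact objective may differ; the names are illustrative): penalize the component of the target-domain predictions that falls outside the span of the top-k left singular vectors of the unlabeled target data matrix.

```python
# Illustrative label-alignment-style regularizer for a linear predictor.
import torch

def label_alignment_penalty(X_target: torch.Tensor, w: torch.Tensor, k: int) -> torch.Tensor:
    """X_target: (n, d) unlabeled target data, w: (d,) linear weights, k: rank of the subspace."""
    U, _, _ = torch.linalg.svd(X_target, full_matrices=False)
    preds = X_target @ w                       # target-domain predictions
    aligned = U[:, :k] @ (U[:, :k].T @ preds)  # projection onto span of top-k singular vectors
    return ((preds - aligned) ** 2).sum()

# usage: loss = mse(X_source @ w, y_source) + lam * label_alignment_penalty(X_target, w, k=5)
```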
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
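A simplified sketch of the Jensen-Shannon-style mutual-information lower bound commonly used in this setting (global variant only; `score_fn`, which scores feature/code pairs, is an assumed component, and this is not the full DIM objective):

```python
# Jensen-Shannon MI lower bound: positives are matched (feature, code) pairs,
# negatives are obtained by shuffling codes within the batch.
import torch
import torch.nn.functional as F

def jsd_mi_estimate(features: torch.Tensor, codes: torch.Tensor, score_fn) -> torch.Tensor:
    """features, codes: (batch, ...) paired samples; maximize the returned value."""
    shuffled = codes[torch.randperm(codes.size(0))]
    pos = score_fn(features, codes)       # scores for matched (x, E(x)) pairs
    neg = score_fn(features, shuffled)    # scores for mismatched pairs
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()

# usage: loss = -jsd_mi_estimate(encoder_features(x), encoder_output(x), score_fn)
```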
A machine learning model, under the influence of observed or unobserved confounders in the training data, can learn spurious correlations and fail to generalize when deployed. For image classifiers, augmenting a training dataset using counterfactual examples has been empirically shown to break spurious correlations. However, the counterfactual generation task itself becomes more difficult as the level of confounding increases. Existing methods for counterfactual generation under confounding consider a fixed set of interventions (e.g., texture, rotation) and are not flexible enough to capture diverse data-generating processes. Given a causal generative process, we formally characterize the adverse effects of confounding on any downstream tasks and show that the correlation between generative factors (attributes) can be used to quantitatively measure confounding between generative factors. To minimize such correlation, we propose a counterfactual generation method that learns to modify the value of any attribute in an image and generate new images given a set of observed attributes, even when the dataset is highly confounded. These counterfactual images are then used to regularize the downstream classifier such that the learned representations are the same across various generative factors conditioned on the class label. Our method is computationally efficient, simple to implement, and works well for any number of generative factors and confounding variables. Our experimental results on both synthetic (MNIST variants) and real-world (CelebA) datasets show the usefulness of our approach.
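A hedged sketch of the regularization step (illustrative names; not the authors' exact formulation): counterfactual images produced by the generator are used to pull together the classifier's representations of an image and its counterfactual whenever they share a class label.

```python
# Counterfactual consistency regularizer: classify both an image and its counterfactual
# correctly, and keep their representations close given the shared label.
import torch
import torch.nn.functional as F

def counterfactual_consistency_loss(encoder, classifier,
                                    x: torch.Tensor, x_cf: torch.Tensor,
                                    y: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """x_cf: counterfactuals of x that differ in a generative attribute but keep the label y."""
    h, h_cf = encoder(x), encoder(x_cf)
    ce = F.cross_entropy(classifier(h), y) + F.cross_entropy(classifier(h_cf), y)
    consistency = ((h - h_cf) ** 2).mean()   # representations should match given the label
    return ce + lam * consistency
```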
Conventional supervised learning methods, in particular deep ones, are found to be sensitive to out-of-distribution (OOD) examples, mainly because the learned representation mixes the semantic factor with the variation factor owing to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning that models the two factors separately, and we develop methods for OOD prediction from a single training domain, which is both common and challenging. The methods build on the causal invariance principle, with a novel design in variational Bayes for efficient learning and easy prediction. Theoretically, we prove that under certain conditions CSG can identify the semantic factor by fitting the training data, and that this semantic identification guarantees a bound on the OOD generalization error and the success of adaptation. Empirical studies show improved performance over prevailing baselines.