Causal reasoning provides a language for asking important interventional and counterfactual questions beyond purely statistical association. In medical imaging, for example, we may want to study the causal effect of genetic, environmental, or lifestyle factors on the normal and pathological variation of anatomical phenotypes. However, while anatomical shape models of 3D surface meshes, extracted from automated image segmentation, can be reliably constructed, computational tooling to enable causal reasoning about morphological variation has been lacking. To tackle this problem, we propose deep structural causal shape models (CSMs), which utilise high-quality mesh generation techniques from geometric deep learning within the expressive framework of deep structural causal models. CSMs enable subject-specific prognoses through counterfactual mesh generation ("How would this patient's brain structure change if they were ten years older?"), in contrast to most current work on purely population-level statistical shape modelling. We demonstrate the capabilities of CSMs at all levels of Pearl's causal hierarchy through a number of qualitative and quantitative experiments leveraging a large dataset of 3D brain structures.
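To make the counterfactual query above concrete, here is a minimal sketch of the three-step procedure (abduction, action, prediction) that structural causal models support, with a hand-picked invertible mechanism standing in for the paper's learned mesh generator; all variables and numbers are illustrative only:

```python
import numpy as np

# Toy SCM: age -> volume, with an invertible additive-noise mechanism.
# In the paper's setting the mechanism over meshes is a learned deep model;
# here f is a hand-picked function purely for illustration.
def f(age, eps):
    return 1500.0 - 5.0 * age + eps          # volume := f(age, eps)

def f_inverse(age, volume):
    return volume - (1500.0 - 5.0 * age)     # recover eps given (age, volume)

# Observed datum.
age_obs, volume_obs = 60.0, 1190.0

# 1) Abduction: infer the exogenous noise consistent with the observation.
eps = f_inverse(age_obs, volume_obs)

# 2) Action: intervene do(age := age + 10).
age_cf = age_obs + 10.0

# 3) Prediction: push the *same* noise through the intervened model.
volume_cf = f(age_cf, eps)

print(f"counterfactual volume at age {age_cf:.0f}: {volume_cf:.1f}")
```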
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and nonlinear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We show that replacing the expression space of an existing state-of-the-art face model with our model achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.
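For context, a minimal sketch of a spectral convolution on a mesh follows, using a truncated Chebyshev polynomial expansion of the graph Laplacian, a standard way to implement such filters; the mesh, feature sizes, and weights below are toy placeholders, not the paper's architecture:

```python
import numpy as np
import scipy.sparse as sp

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for a mesh adjacency matrix."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return sp.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt

def cheb_conv(x, adj, weights, K=6, lmax=2.0):
    """Order-K Chebyshev spectral convolution, the kind of building block a
    mesh autoencoder can stack. x: (V, Fin) vertex features; weights: (K, Fin, Fout)."""
    L = normalized_laplacian(adj)
    L_scaled = (2.0 / lmax) * L - sp.eye(adj.shape[0])  # rescale spectrum to [-1, 1]
    Tx = [x, L_scaled @ x]                               # T_0(L)x, T_1(L)x
    for _ in range(2, K):
        Tx.append(2.0 * (L_scaled @ Tx[-1]) - Tx[-2])    # Chebyshev recurrence
    return sum(t @ w for t, w in zip(Tx, weights))

# Tiny example: a 4-vertex "mesh" with 3D vertex positions as features.
adj = sp.csr_matrix(np.array([[0, 1, 1, 0],
                              [1, 0, 1, 1],
                              [1, 1, 0, 1],
                              [0, 1, 1, 0]], dtype=float))
x = np.random.default_rng(0).standard_normal((4, 3))
w = np.random.default_rng(1).standard_normal((6, 3, 8)) * 0.1
print(cheb_conv(x, adj, w).shape)  # (4, 8)
```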
Counterfactual examples, input perturbations that change specific features but not others, have been shown to be useful for evaluating the bias of machine learning models, e.g., against specific demographic groups. However, generating counterfactual examples for images is non-trivial due to the underlying causal structure over the various features of an image. To be meaningful, the generated perturbations need to satisfy the constraints implied by a causal model. We present a method for generating counterfactuals by incorporating a structural causal model (SCM) into an improved variant of Adversarially Learned Inference (ALI), which generates counterfactuals in accordance with the causal relationships between the attributes of an image. Based on the generated counterfactuals, we show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer. On the Morpho-MNIST dataset, our method generates counterfactuals that are qualitatively comparable to prior work on SCM-based counterfactuals (DeepSCM), while on the more complex CelebA dataset it outperforms DeepSCM in generating high-quality valid counterfactuals. Moreover, the generated counterfactuals are indistinguishable from reconstructed images in a human evaluation experiment, and we subsequently use them to evaluate the fairness of a standard classifier trained on CelebA data. We show that the classifier is biased w.r.t. skin and hair color, and how counterfactual regularization can remove those biases.
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments. In this paper, we propose CITRIS, a variational autoencoder framework that learns causal representations from temporal sequences of images in which the underlying causal factors may have been intervened upon. In contrast to recent literature, CITRIS exploits temporality and observed intervention targets to identify scalar and multidimensional causal factors, such as 3D rotation angles. Furthermore, by introducing a normalizing flow, CITRIS can easily be extended to leverage and disentangle representations obtained by already pretrained autoencoders. Extending previous results on scalar causal factors, we prove identifiability in a more general setting, in which only some components of a causal factor are affected by interventions. In experiments on 3D rendered image sequences, CITRIS outperforms previous methods at recovering the underlying causal variables. Moreover, using pretrained autoencoders, CITRIS can even generalize to unseen instantiations of causal factors, opening future research areas in sim-to-real generalization for causal representation learning.
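As a rough sketch of one ingredient the abstract highlights, the following shows one way a transition prior can factorize per causal factor and condition on binary intervention targets; the fixed assignment of latent dimensions to factors, the network sizes, and all names are assumptions of this sketch, not CITRIS's actual implementation:

```python
import torch
import torch.nn as nn

class FactorizedTransitionPrior(nn.Module):
    """Sketch of a factorized temporal prior: latent dims are assigned to
    causal factors, and each factor's block is modelled as
    p(z_t^i | z_{t-1}, I_t^i), conditioning on that factor's binary
    intervention target I_t^i. The dimension assignment here is fixed."""

    def __init__(self, latent_dim=8, num_factors=4, hidden=32):
        super().__init__()
        assert latent_dim % num_factors == 0
        self.block = latent_dim // num_factors
        # One small network per factor outputs the mean/log-std of its block.
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * self.block))
            for _ in range(num_factors))

    def log_prob(self, z_prev, z_next, targets):
        """z_prev, z_next: (B, latent_dim); targets: (B, num_factors) in {0,1}."""
        total = 0.0
        for i, net in enumerate(self.nets):
            inp = torch.cat([z_prev, targets[:, i:i + 1]], dim=-1)
            mean, log_std = net(inp).chunk(2, dim=-1)
            block = z_next[:, i * self.block:(i + 1) * self.block]
            dist = torch.distributions.Normal(mean, log_std.exp())
            total = total + dist.log_prob(block).sum(dim=-1)
        return total  # (B,)

prior = FactorizedTransitionPrior()
z0, z1 = torch.randn(5, 8), torch.randn(5, 8)
I = torch.randint(0, 2, (5, 4)).float()
print(prior.log_prob(z0, z1, I).shape)  # torch.Size([5])
```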
Learning a disentangled, interpretable, and structured latent representation in 3D generative models of faces and bodies is still an open problem. The problem is particularly acute when control over identity features is required. In this paper, we propose an intuitive yet effective self-supervised approach to train a 3D shape variational autoencoder (VAE) that encourages a disentangled latent representation of identity features. Curating mini-batch generation by swapping arbitrary features across different shapes allows defining a loss function that leverages known differences and similarities in the latent representations. Experimental results conducted on 3D meshes show that state-of-the-art methods for latent disentanglement are not able to disentangle identity features of faces and bodies. Our proposed method properly decouples the generation of such features while maintaining good representation and reconstruction capabilities.
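A hedged sketch of how a feature-swapping consistency loss might look: assuming the latent is split into one chunk per shape feature (an assumption of this sketch, not a detail taken from the paper), the latent of a swapped shape should match the donor on the swapped chunk and the source shape everywhere else:

```python
import torch

def swap_consistency_loss(z_a, z_b, z_swapped, feat_idx):
    """Sketch of a latent-consistency loss for mini-batch feature swapping.
    z_a, z_b: latents of two original shapes; z_swapped: latent of shape A
    with feature `feat_idx` (e.g. the nose) replaced by B's. Each latent is
    (B, num_feats, dims_per_feat), one latent chunk per shape feature.
    The swapped chunk should match donor B, everything else shape A."""
    num_feats = z_a.shape[1]
    donor = (torch.arange(num_feats) == feat_idx)        # boolean mask over chunks
    target = torch.where(donor[None, :, None], z_b, z_a)
    return ((z_swapped - target) ** 2).mean()

# Toy usage with latents split into 4 chunks of 4 dims each.
z_a, z_b = torch.randn(2, 8, 4, 4)
z_swapped = z_a.clone()
z_swapped[:, 2] = z_b[:, 2]                              # pretend feature 2 was swapped
print(swap_consistency_loss(z_a, z_b, z_swapped, feat_idx=2))  # ~0
```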
Estimating the effect of an intervention while accounting for confounding variables is a key task in causal inference. Oftentimes, the confounders are unobserved, but we have access to large amounts of unstructured data (images, text) that contain valuable proxy signal about the missing confounders. This paper shows that leveraging unstructured data, which is often left unused by existing algorithms, improves the accuracy of causal effect estimation. Specifically, we introduce deep multi-modal structural equations, a generative model in which confounders are latent variables and unstructured data are proxy variables. This model supports multiple multimodal proxies (images, text) as well as missing data. We empirically demonstrate on tasks in genomics and healthcare that our approach corrects for confounding using unstructured inputs, potentially enabling the use of large amounts of data that were previously not used in causal inference.
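To illustrate the factorization the abstract describes (confounders as latent variables, unstructured data as proxies), here is a toy generative sketch; all module sizes and names are placeholders, and the actual model's networks and likelihoods are not reproduced here:

```python
import torch
import torch.nn as nn

class MultiModalProxySEM(nn.Module):
    """Toy sketch of the structure described above: a latent confounder z
    generates the unstructured proxies (here an image and a text embedding)
    as well as treatment t and outcome y, i.e. the factorization
    p(z) p(image|z) p(text|z) p(t|z) p(y|t,z). All modules are small linear
    stand-ins."""

    def __init__(self, z_dim=8):
        super().__init__()
        self.z_dim = z_dim
        self.image_dec = nn.Linear(z_dim, 64)    # p(image | z), flattened pixels
        self.text_dec = nn.Linear(z_dim, 32)     # p(text | z), an embedding
        self.treat = nn.Linear(z_dim, 1)         # p(t | z), Bernoulli logit
        self.outcome = nn.Linear(z_dim + 1, 1)   # p(y | t, z)

    @torch.no_grad()
    def sample(self, n):
        z = torch.randn(n, self.z_dim)                     # confounder prior p(z)
        t = torch.bernoulli(torch.sigmoid(self.treat(z)))  # treatment assignment
        y = self.outcome(torch.cat([z, t], dim=-1))        # outcome depends on t and z
        return self.image_dec(z), self.text_dec(z), t, y

model = MultiModalProxySEM()
img, txt, t, y = model.sample(4)
print(img.shape, txt.shape, t.shape, y.shape)
```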
Computational fluid dynamics (CFD) can be used to simulate vascular haemodynamics and analyse potential treatment options. CFD has been shown to be beneficial in improving patient outcomes. However, its implementation in routine clinical practice has yet to be realised. Barriers to CFD include the high computational resources required, the specialist experience needed to design simulation set-ups, and long processing times. The aim of this study was to explore the use of machine learning (ML) to replicate conventional aortic CFD with automatic and fast regression models. The data used to train/test the model consisted of 3,000 CFD simulations performed on synthetically generated 3D aortic shapes. These subjects were generated from a statistical shape model (SSM) built on real patient-specific aortas (N = 67). Inference performed on 200 test shapes resulted in average errors of 6.01% +/- 3.12 SD and 3.99% +/- 0.93 SD for pressure and velocity, respectively. Our ML-based model performs CFD in ~0.075 seconds (4,000x faster than the solver). This study shows that the results of conventional vascular CFD can be reproduced using ML at a much faster rate, in an automatic process, and with high accuracy.
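As a stand-in illustration of the surrogate idea (regressing from SSM shape coefficients to simulated fields), here is a minimal linear ridge-regression sketch on synthetic data; the study's actual ML models, data, and error metrics are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: each aorta is described by a handful of statistical shape
# model (SSM) coefficients, and the target is a flattened field of nodal
# pressures. Sizes and the linear ground truth are synthetic for this sketch.
n_shapes, n_coeffs, n_nodes = 300, 10, 500
ssm_coeffs = rng.standard_normal((n_shapes, n_coeffs))
true_map = rng.standard_normal((n_coeffs, n_nodes))
pressure = ssm_coeffs @ true_map + 0.01 * rng.standard_normal((n_shapes, n_nodes))

# Closed-form ridge-regression surrogate: once fitted, a matrix product
# replaces the CFD solver at inference time.
lam = 1e-3
X_train, Y_train = ssm_coeffs[:200], pressure[:200]
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_coeffs),
                    X_train.T @ Y_train)

pred = ssm_coeffs[200:] @ W
rel_err = np.abs(pred - pressure[200:]).mean() / np.abs(pressure[200:]).mean()
print(f"mean relative error on held-out shapes: {100 * rel_err:.2f}%")
```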
Disentanglement learning aims at finding a low-dimensional representation consisting of multiple explanatory and generative factors of the observational data. The variational autoencoder (VAE) framework is commonly used to disentangle independent factors from observations. However, in real scenarios, semantically meaningful factors are not necessarily independent. Instead, there might be an underlying causal structure that renders these factors dependent. We therefore propose a new VAE-based framework named CausalVAE, which includes a Causal Layer that transforms independent exogenous factors into causal endogenous factors corresponding to causally related concepts in the data. We further analyse the identifiability of the model, showing that the proposed model learned from observations recovers the true one up to a certain degree. Experiments are conducted on various datasets, including synthetic data and the real-world benchmark CelebA. The results show that the causal representations learned by CausalVAE are semantically interpretable, and that their causal relations, as a directed acyclic graph (DAG), are identified with good accuracy. Furthermore, we demonstrate that the proposed CausalVAE model is able to generate counterfactual data through "do-operations" on the causal factors.
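The Causal Layer idea can be sketched directly for the linear case: with a DAG adjacency A, exogenous factors map to endogenous ones via z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps, and a do-intervention cuts a factor's incoming edges. The toy graph and values below are illustrative:

```python
import numpy as np

# Sketch of a linear causal layer: exogenous factors eps are mapped to
# endogenous factors z via a DAG adjacency A (A[i, j] != 0 means i -> j),
# so z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps.
A = np.array([[0.0, 0.8, 0.0],    # toy DAG: 0 -> 1 -> 2
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])

def causal_layer(eps, A):
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, eps)

def do_intervention(eps, A, index, value):
    """do(z_index := value): cut incoming edges and pin the exogenous input
    so the solved system yields exactly `value` at that coordinate."""
    A_do = A.copy()
    A_do[:, index] = 0.0              # remove parents of the target factor
    eps_do = eps.copy()
    eps_do[index] = value
    return causal_layer(eps_do, A_do)

eps = np.array([1.0, 0.0, 0.0])
print(causal_layer(eps, A))                         # effects propagate 0 -> 1 -> 2
print(do_intervention(eps, A, index=1, value=2.0))  # z_1 pinned, z_2 responds
```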
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from research on inference methods, preventing a straightforward combination of methods from both fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks, across data types and levels of missingness.
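To make the additive-noise-model setting concrete, a toy sketch follows: each variable is a non-linear function of its parents plus noise, and interventional quantities such as the ATE can be estimated by simulating the mutilated model. The functions here are hand-picked stand-ins, not DECI's learned flows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the additive-noise-model (ANM) view: each variable is a
# non-linear function of its parents plus noise, x_i = f_i(pa(x_i)) + z_i.
def simulate(n, do_t=None):
    z = rng.standard_normal((n, 3))
    x = z[:, 0]                                   # confounder
    t = np.tanh(x) + 0.5 * z[:, 1] if do_t is None else np.full(n, do_t)
    y = np.sin(t) + 0.8 * x + 0.3 * z[:, 2]       # outcome
    return x, t, y

# Average treatment effect of do(t := 1) vs do(t := 0), estimated by
# simulating the mutilated SEM (a learned model can estimate (C)ATE by
# sampling from itself in the same spirit).
_, _, y1 = simulate(100_000, do_t=1.0)
_, _, y0 = simulate(100_000, do_t=0.0)
print(f"ATE estimate: {y1.mean() - y0.mean():.3f}  (true: {np.sin(1) - np.sin(0):.3f})")
```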
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs: capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the CCVAE, a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
By highlighting the regions of the input image that contribute most to the decision, saliency maps have become a popular method for making neural networks interpretable. In medical imaging, they are particularly well-suited for explaining neural networks in the context of abnormality localization. However, in our experiments, they are less suited to classification problems in which the features that distinguish the different classes are spatially correlated, scattered, and definitely non-trivial. In this paper, we propose a new paradigm for better interpretability. To this end, we provide the user with relevant and easily interpretable information so that they can form their own opinion. We use a disentangled variational autoencoder whose latent representation is divided into two components: a non-interpretable part and a disentangled part. The latter accounts for categorical variables explicitly representing the different classes of interest. In addition to providing the class of a given input sample, such a model offers the possibility of transforming the sample into a sample of another class by modifying the value of the categorical variable in the latent representation. This paves the way for easier interpretation of class differences. We illustrate the relevance of this approach in the context of automatic sex determination from hip bones in forensic medicine. The features encoded by the model that distinguish the different classes were found to be consistent with expert knowledge.
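A minimal sketch of the class-transformation mechanism described above: encode a sample, overwrite the categorical part of the latent with the target class, and decode. The latent layout and the stand-in encoder/decoder below are assumptions made only so the sketch runs end to end:

```python
import torch

@torch.no_grad()
def class_swap(encoder, decoder, x, target_class, num_classes):
    """Sketch of the class-transformation idea: encode a sample, overwrite
    the categorical part of the latent with the target class, and decode.
    `encoder`/`decoder` are stand-ins for a trained disentangled VAE; the
    latent layout (categorical one-hot followed by a non-interpretable
    continuous part) is an assumption of this sketch."""
    z_cat, z_rest = encoder(x)                      # categorical + residual latents
    z_cat_new = torch.nn.functional.one_hot(
        torch.tensor([target_class]), num_classes).float().expand_as(z_cat)
    return decoder(torch.cat([z_cat_new, z_rest], dim=-1))

# Toy stand-in modules so the sketch runs end to end.
enc = lambda x: (torch.eye(2)[:1].expand(x.shape[0], 2), x[:, :4])
dec = torch.nn.Linear(6, 16)
x = torch.randn(3, 16)
print(class_swap(enc, dec, x, target_class=1, num_classes=2).shape)  # (3, 16)
```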
Generative models, as an important family of statistical modeling, target learning the observed data distribution via generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, considering that 3D data aligns better with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which naturally has an efficient representation (i.e., the pixel grid), representing 3D data poses far more challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet the above requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion can help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Latent world models allow agents to reason about complex environments with high-dimensional observations. However, adapting to new environments and effectively leveraging previous knowledge remain significant challenges. We present variational causal dynamics (VCD), a structured world model that exploits the invariance of causal mechanisms across environments to achieve fast and modular adaptation. By causally factorising the transition model, VCD is able to identify reusable components across different environments. This is achieved by combining causal discovery and variational inference to jointly learn a latent representation and a transition model in an unsupervised manner. Specifically, we optimise the evidence lower bound jointly over a representation model and a transition model structured as a causal graphical model. In evaluations on simulated environments with state and image observations, we show that VCD is able to successfully identify causal variables and to discover consistent causal structures across different environments. Moreover, given a small number of observations in a previously unseen, intervened environment, VCD is able to identify the sparse changes in the dynamics and to adapt efficiently. In doing so, VCD significantly extends the capabilities of the current state of the art in latent world models, while also comparing favourably in terms of prediction accuracy.
Causal representation learning is the task of identifying the underlying causal variables and their relations from high-dimensional observations, such as images. Recent work has shown that one can reconstruct the causal variables from temporal sequences of observations under the assumption that there are no instantaneous causal relations between them. In practical applications, however, our measurement or frame rate may be slower than many of the causal effects. This effectively creates "instantaneous" effects and invalidates previous identifiability results. To address this issue, we propose iCITRIS, a causal representation learning method that can handle instantaneous effects in temporal sequences when given perfect interventions with known intervention targets. iCITRIS identifies the causal factors from temporal observations, while simultaneously using a differentiable causal discovery method to learn their causal graph. In experiments on three video datasets, iCITRIS accurately identifies the causal factors and their causal graph.
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?". The comparison of this instance before and after perturbation can enhance human interpretation. Most existing studies on counterfactual explanations are limited in tabular data or image data. In this work, we study the problem of counterfactual explanation generation on graphs. A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed: 1) optimizing in the discrete and disorganized space of graphs; 2) generalizing on unseen graphs; and 3) maintaining the causality in the generated counterfactuals without prior knowledge of the causal model. To tackle these challenges, we propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models. Specifically, CLEAR leverages a graph variational autoencoder based mechanism to facilitate its optimization and generalization, and promotes causality by leveraging an auxiliary variable to better identify the underlying causal model. Extensive experiments on both synthetic and real-world graphs validate the superiority of CLEAR over the state-of-the-art methods in different aspects.
Before adopting a new treatment policy, which defines when and how a sequence of treatments affecting the outcome is administered, decision makers need to predict how the outcome will progress. Commonly, algorithms that predict interventional future outcome trajectories take a fixed sequence of future treatments as input. This either neglects the dependence of future treatments on the outcomes that precede them, or implicitly assumes the treatment policy is known, and hence excludes scenarios where the policy is unknown or a counterfactual analysis is needed. To handle these limitations, we develop a joint model for treatments and outcomes that allows estimating treatment policies and the effects of sequential treatments from treatment-outcome data. It can answer interventional and counterfactual queries about interventions on treatment policies, as we show with real-world data on blood glucose progression and a simulation study building on it.
A causal effect variational autoencoder (CEVAE) is trained to predict the outcome given observational treatment data, while a uniform treatment variational autoencoder (UTVAE) is trained with a uniform treatment distribution using importance sampling. In this paper, we show that using the uniform treatment distribution instead of the observational one leads to better causal inference by mitigating the distribution shift that occurs from training to test time. We also explore combinations of uniform and observational treatment distributions for the inference and generative network training objectives, to find a better training procedure for inferring treatment effects. Experimentally, we find that the proposed UTVAE yields better absolute average treatment effect error and precision in the estimation of heterogeneous effect error than the CEVAE on synthetic and IHDP datasets.
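The uniform-treatment idea can be sketched with importance weights: for binary treatments, observational samples are reweighted by w = p_unif(t) / p_obs(t | x). The propensity estimates below are placeholders; how UTVAE obtains and applies the weights within its objectives is not reproduced here:

```python
import torch

def uniform_treatment_weights(t, propensity):
    """Sketch of the importance weights behind uniform-treatment training:
    reweight observational samples so losses behave as if binary treatments
    were drawn uniformly, w = p_unif(t) / p_obs(t | x) = 0.5 / p_obs(t | x).
    `propensity` is an estimated p(t=1 | x); how it is obtained is outside
    this sketch."""
    p_obs = torch.where(t == 1, propensity, 1.0 - propensity)
    return 0.5 / p_obs.clamp_min(1e-3)

# Toy usage: per-sample loss terms reweighted toward the uniform distribution.
t = torch.tensor([1.0, 0.0, 1.0])
propensity = torch.tensor([0.9, 0.4, 0.5])
per_sample_loss = torch.tensor([0.3, 0.7, 0.5])
w = uniform_treatment_weights(t, propensity)
print((w * per_sample_loss).mean())
```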
This paper introduces a novel approach to flow-based models with hierarchical structures. The proposed framework is named the variational flow graphical (VFG) model. VFGs learn representations of high-dimensional data via a message-passing scheme, integrating flow-based functions through variational inference. By leveraging the expressive power of neural networks, VFGs produce a lower-dimensional representation of the data, overcoming the drawback of many flow-based models, which usually require a high-dimensional latent space involving many trivial variables. Aggregation nodes are introduced in VFG models to integrate forward and backward hierarchical information via the message-passing scheme. Maximizing the evidence lower bound (ELBO) of the data likelihood aligns the forward and backward messages in each aggregation node, achieving a consistent node state. Algorithms have been developed to learn the model parameters through gradient updates with respect to the ELBO objective. The consistency of aggregation nodes makes VFGs applicable to tractable inference on graphical structures. Besides representation learning and numerical inference, VFGs provide a new approach to distribution modelling on datasets with graphical latent structures. In addition, a theoretical study shows that, by leveraging the invertibility of their flow-based components, VFGs are universal approximators. With flexible graphical structures and superior expressive power, VFGs can potentially be used to improve probabilistic inference. In experiments, VFGs achieve improved evidence lower bound (ELBO) and likelihood values on multiple datasets.
Bayesian causal structure learning aims to learn a posterior distribution over directed acyclic graphs (DAGs), and the mechanisms that define the relationship between parent and child variables. By taking a Bayesian approach, it is possible to reason about the uncertainty of the causal model. The notion of modelling the uncertainty over models is particularly crucial for causal structure learning since the model could be unidentifiable when given only a finite amount of observational data. In this paper, we introduce a novel method to jointly learn the structure and mechanisms of the causal model using Variational Bayes, which we call Variational Bayes-DAG-GFlowNet (VBG). We extend the method of Bayesian causal structure learning using GFlowNets to learn not only the posterior distribution over the structure, but also the parameters of a linear-Gaussian model. Our results on simulated data suggest that VBG is competitive against several baselines in modelling the posterior over DAGs and mechanisms, while offering several advantages over existing methods, including the guarantee to sample acyclic graphs, and the flexibility to generalize to non-linear causal mechanisms.