Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurements of proxies for them. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAEs), which follow the causal structure of inference with proxies. We show that our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
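A minimal sketch of this structure (illustrative PyTorch; binary treatment, continuous outcome, and all layer sizes and names are our assumptions, not the authors' released code): an inference network q(z | x, t, y) recovers the latent confounder from the proxies, and the generative model factorizes as p(x | z) p(t | z) p(y | t, z), mirroring the causal graph.

```python
# Illustrative CEVAE-style sketch (not the authors' code).
# x: (n, x_dim) proxies; t: (n, 1) binary treatment as float; y: (n, 1) outcome.
import torch
import torch.nn as nn

class CEVAESketch(nn.Module):
    def __init__(self, x_dim, z_dim=20, h=64):
        super().__init__()
        # Inference network q(z | x, t, y).
        self.encoder = nn.Sequential(nn.Linear(x_dim + 2, h), nn.ELU(), nn.Linear(h, 2 * z_dim))
        # Generative model follows the causal graph z -> (x, t, y).
        self.dec_x = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, x_dim))  # p(x | z)
        self.dec_t = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, 1))      # p(t | z)
        self.dec_y = nn.Sequential(nn.Linear(z_dim + 1, h), nn.ELU(), nn.Linear(h, 1))  # p(y | t, z)

    def forward(self, x, t, y):
        mu, logvar = self.encoder(torch.cat([x, t, y], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        # Negative ELBO: reconstruction of (x, t, y) plus KL(q(z|x,t,y) || N(0, I)).
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        rec = ((self.dec_x(z) - x) ** 2).sum(-1).mean() \
            + nn.functional.binary_cross_entropy_with_logits(self.dec_t(z), t) \
            + ((self.dec_y(torch.cat([z, t], dim=-1)) - y) ** 2).mean()
        return rec + kl
```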
Causal inference with observational data is incredibly useful across a wide range of tasks, including the development of medical treatments, advertising and marketing, and policy making. There are two significant challenges associated with causal inference from observational data: treatment assignment heterogeneity (i.e., differences between the treated and untreated groups) and the absence of counterfactual data (i.e., not knowing what would have happened if an individual who did receive treatment had instead not been treated). We address these two challenges by combining structured inference and targeted learning. On the structural side, we factorize the joint distribution into risk, confounding, instrumental, and miscellaneous factors; on the targeted learning side, we apply a regularizer derived from the influence curve to reduce residual bias. An ablation study is undertaken, and an evaluation on benchmark datasets demonstrates that TVAE has competitive and state-of-the-art performance.
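For context, such regularizers are typically built from the efficient influence curve of the average treatment effect; its textbook form (standard notation; the paper's exact variant may differ) is

$$\varphi(x, t, y) = \frac{t}{\hat\pi(x)}\bigl(y - \hat\mu_1(x)\bigr) - \frac{1-t}{1-\hat\pi(x)}\bigl(y - \hat\mu_0(x)\bigr) + \hat\mu_1(x) - \hat\mu_0(x) - \hat\psi,$$

where $\hat\pi$ is the estimated propensity score, $\hat\mu_t$ the outcome model, and $\hat\psi$ the effect estimate; penalizing the empirical mean of $\varphi$ pushes residual bias toward zero.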
An important problem in causal inference is to decompose the total effect of a treatment on an outcome into different causal pathways and to quantify the causal effect in each pathway. For instance, in causal fairness, the total effect of being a male employee (i.e., the treatment) on annual income (i.e., the outcome) consists of a direct effect and an indirect effect via the employee's occupation (i.e., the mediator). Causal mediation analysis (CMA) is a formal statistical framework for revealing such underlying causal mechanisms. A major challenge of CMA in observational studies is handling confounders, variables that induce spurious causal relationships among the treatment, mediator, and outcome. Conventional methods assume sequential ignorability, which implies that all confounders can be measured and which is often unverifiable in practice. This work aims to circumvent the stringent sequential ignorability assumption and to account for hidden confounders. Drawing on proxy strategies and recent advances in deep learning, we propose to simultaneously uncover the latent variables characterizing the hidden confounders and estimate the causal effects. Empirical evaluations using synthetic and semi-synthetic datasets validate the effectiveness of the proposed method. We further demonstrate the potential of our method for causal fairness analysis.
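For reference, the decomposition CMA targets, in standard potential-outcome notation with mediator $M(t)$ and outcome $Y(t, m)$ (generic textbook form, not specific to this paper):

$$\mathrm{NDE} = \mathbb{E}\bigl[Y(1, M(0)) - Y(0, M(0))\bigr], \qquad \mathrm{NIE} = \mathbb{E}\bigl[Y(1, M(1)) - Y(1, M(0))\bigr],$$

and the total effect $\mathrm{TE} = \mathbb{E}[Y(1, M(1)) - Y(0, M(0))] = \mathrm{NDE} + \mathrm{NIE}$.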
The Causal Effect Variational Autoencoder (CEVAE) is trained to predict the outcome given observational treatment data, while the Uniform Treatment Variational Autoencoder (UTVAE) is trained with a uniform treatment distribution via importance sampling. In this paper, we show that using the uniform treatment distribution rather than the observational one leads to better causal inference by mitigating the distribution shift that occurs from training to test time. We also explore combinations of uniform and observational treatment distributions for the inference and generative network training objectives, to find a better training procedure for inferring treatment effects. Experimentally, we find that the proposed UTVAE yields better absolute average treatment effect error and precision in estimation of heterogeneous effect error than CEVAE on synthetic and IHDP datasets.
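A minimal sketch of the reweighting idea as we read it (illustrative code, not the authors' implementation): with a binary treatment, samples from the observational distribution are importance-weighted so the objective behaves as if treatments had been assigned uniformly.

```python
import torch

def uniform_treatment_weights(t, propensity, eps=1e-3):
    """t: (n,) binary treatments; propensity: (n,) estimated p(t=1 | x).
    Returns w_i = p_uniform(t_i) / p_obs(t_i | x_i) for importance sampling."""
    p_obs = torch.where(t == 1, propensity, 1.0 - propensity)
    return 0.5 / p_obs.clamp_min(eps)  # uniform over {0, 1} gives p_uniform = 0.5

# Hypothetical usage with a per-sample training objective:
#   loss = (uniform_treatment_weights(t, e_hat) * per_sample_elbo).mean()
```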
Causal inference estimates the causal effect in a causal relationship when an intervention is applied. Precisely, in a causal model with binary interventions, i.e., control and treatment, the causal effect is simply the difference between the factual and the counterfactual. The difficulty is that the counterfactual must be estimated, so the causal effect can only ever be an estimate. The main challenge in estimating the counterfactual is identifying the confounders, which affect both the outcome and the treatment. A typical approach is to cast causal inference as a supervised learning problem so that the counterfactual can be predicted. Recent machine learning methods, including linear regression and deep learning models, have been adapted to causal inference. In this paper, we propose a method for estimating causal effects using a variational information bottleneck (CEVIB). The promising point is that the VIB is able to naturally distill confounding variables from the data, which enables causal effect estimation from observational data. By applying CEVIB to three datasets, we show that our method achieves the best performance compared with other methods. We also experimentally demonstrate the robustness of our method.
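For context, the generic variational information bottleneck objective that CEVIB adapts (standard form; the paper's treatment/outcome-specific variant may differ) is to minimize

$$\mathcal{L}_{\mathrm{VIB}} = \mathbb{E}_{p(x,y)}\Bigl[\mathbb{E}_{q(z \mid x)}\bigl[-\log q(y \mid z)\bigr] + \beta\,\mathrm{KL}\bigl(q(z \mid x)\,\|\,r(z)\bigr)\Bigr],$$

which bounds the ideal objective $-I(Z; Y) + \beta I(Z; X)$ from above (up to a constant); the KL term is what distills a compressed latent $Z$ from the data.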
Causal inference, which estimates treatment effects (i.e., the causal effects of treatments on outcomes), benefits decision making in various domains. A fundamental challenge in this line of study is the treatment bias in observational data. To improve the validity of observational studies for causal inference, representation-based methods have shown state-of-the-art performance in treatment effect estimation. Most representation-based methods assume that all observed covariates are pre-treatment (i.e., not affected by the treatment) and learn a balanced representation of these observed covariates to estimate treatment effects. Unfortunately, this assumption is often too strict a requirement in practice, as some covariates are changed by the intervention of the treatment (i.e., they are post-treatment), and a balanced representation learned from such covariates therefore biases the treatment effect estimation.
Estimating the effects of interventions while accounting for confounding variables is a key task in causal inference. Often, the confounders are not observed, but we have access to large amounts of unstructured data (images, text) that contain valuable proxy signals about the missing confounders. This paper shows that leveraging unstructured data, which is typically left unused by existing algorithms, improves the accuracy of causal effect estimation. Specifically, we introduce deep multi-modal structural equations, a generative model in which the confounders are latent variables and the unstructured data are proxy variables. The model supports multiple multi-modal proxies (images, text) as well as missing data. We empirically demonstrate, on tasks in genomics and healthcare, that our method corrects for confounding using unstructured inputs, potentially enabling the use of large amounts of data previously unused in causal inference.
We discuss the estimation of treatment effects (TEs), an important problem in causal inference. Representing the confounders as latent variables, we propose Intact-VAE, a new variant of the variational autoencoder (VAE) motivated by the prognostic score, which is sufficient for identifying TEs. Our VAE also naturally gives representations balanced across treatment groups, using its prior. Experiments on (semi-)synthetic datasets show state-of-the-art performance under diverse settings, including unobserved confounding. Based on the identifiability of our model, we prove identification of TEs under unconfoundedness, and discuss (possible) extensions to harder settings.
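For reference, a prognostic score in Hansen's (2008) sense, which motivates this model, is any function $b(X)$ of the covariates such that

$$Y(0) \perp\!\!\!\perp X \mid b(X),$$

i.e., the score is sufficient for the control potential outcome.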
Estimating treatment effects, especially individualized treatment effects (ITEs), from observational data is challenging due to complex confounding bias. Existing methods for estimating treatment effects from longitudinal observational data are usually built on a strong assumption of "unconfoundedness", which is hard to fulfill in real-world practice. In this paper, we propose the Variational Temporal Deconfounder (VTD), an approach that leverages deep variational embeddings in the longitudinal setting using proxies (i.e., surrogate variables that stand in for unobservable ones). Specifically, VTD leverages observed proxies to learn a hidden embedding that reflects the true hidden confounders in the observational data. As such, our VTD method does not rely on the "unconfoundedness" assumption. We test VTD on both synthetic and real-world clinical data, and the results show that our method is effective compared with other existing models when hidden confounding is the leading source of bias.
The concept of causality plays an important role in human cognition. Over the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, it is increasingly used for causal inference on counterfactual data. Typically, deep causal models map the features of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes without bias under different optimization methods. This paper presents a survey of deep causal models, with the following core contributions: 1) we provide relevant evaluation metrics under multiple treatments and continuous-dose treatments; 2) we give a comprehensive overview of deep causal models from the perspectives of both their development over time and their method classification; 3) we provide a detailed and comprehensive classification and analysis of the relevant datasets and source code.
Causal inference is essential for data-driven decision making in domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from research on inference methods, preventing a direct combination of methods from both fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI recovers the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared with relevant baselines for both causal discovery and (C)ATE estimation, across data types and levels of missingness, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks.
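For reference, the generic non-linear additive noise form that such models build on (standard notation; DECI's flow-based parameterization adds structure on top) is

$$x_i = f_i\bigl(\mathrm{pa}_i(x; G)\bigr) + \epsilon_i, \qquad i = 1, \dots, d,$$

where $\mathrm{pa}_i(x; G)$ denotes the parents of $x_i$ in the graph $G$ and the noises $\epsilon_i$ are mutually independent.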
There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.
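A small sketch of the distributional penalty such bounds suggest (our illustration, not the authors' released code): a biased estimate of the squared MMD with an RBF kernel between treated and control representations.

```python
import torch

def rbf_mmd2(phi_t, phi_c, sigma=1.0):
    """Biased estimate of squared MMD between representations.
    phi_t: (n1, d) treated units; phi_c: (n0, d) control units."""
    def k(a, b):  # RBF kernel matrix
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(phi_t, phi_t).mean() + k(phi_c, phi_c).mean() - 2 * k(phi_t, phi_c).mean()

# A balanced-representation objective then has the shape:
#   loss = outcome_prediction_loss + alpha * rbf_mmd2(phi[t == 1], phi[t == 0])
```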
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods: two-stage least squares with IVs, control functions with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
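As a concrete anchor for the first stream, a compact two-stage least squares (2SLS) sketch in plain numpy (illustrative; real analyses should use a vetted package):

```python
import numpy as np

def two_stage_least_squares(y, t, z):
    """y: (n,) outcome; t: (n,) endogenous treatment; z: (n, k) instruments.
    Returns the 2SLS estimate of the effect of t on y."""
    Z = np.column_stack([np.ones(len(y)), z])           # instruments + intercept
    t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]    # stage 1: project t onto Z
    X = np.column_stack([np.ones(len(y)), t_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]         # stage 2: regress y on t_hat
    return beta[1]
```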
Estimating long-term causal effects based on short-term surrogates is a significant but challenging problem in many real-world applications, such as marketing and medicine. Despite success in certain domains, most existing methods estimate causal effects in an idealistic and simplistic manner, ignoring the causal structure among short-term outcomes and treating all of them as surrogates. Such methods do not apply well to real-world scenarios, in which the partially observed surrogates are mixed with proxies of hidden confounders among the short-term outcomes. To this end, we develop a flexible method, LASER, to estimate long-term causal effects in the more realistic situation where the surrogates are observed or have observed proxies. Using an identifiable variational autoencoder (iVAE), we recover all valid surrogates from the candidates without needing to distinguish observed surrogates from proxies of latent surrogates. With the help of the recovered surrogates, we further design an unbiased estimator of the long-term causal effects. Extensive experimental results on real-world and semi-synthetic datasets demonstrate the effectiveness of our proposed method.
Although understanding and characterizing causal effects have become essential in observational studies, it is challenging when the confounders are high-dimensional. In this article, we develop a general framework $\textit{CausalEGM}$ for estimating causal effects by encoding generative modeling, which can be applied in both binary and continuous treatment settings. Under the potential outcome framework with unconfoundedness, we establish a bidirectional transformation between the high-dimensional confounders space and a low-dimensional latent space where the density is known (e.g., multivariate normal distribution). Through this, CausalEGM simultaneously decouples the dependencies of confounders on both treatment and outcome and maps the confounders to the low-dimensional latent space. By conditioning on the low-dimensional latent features, CausalEGM can estimate the causal effect for each individual or the average causal effect within a population. Our theoretical analysis shows that the excess risk for CausalEGM can be bounded through empirical process theory. Under an assumption on encoder-decoder networks, the consistency of the estimate can be guaranteed. In a series of experiments, CausalEGM demonstrates superior performance over existing methods for both binary and continuous treatments. Specifically, we find CausalEGM to be substantially more powerful than competing methods in the presence of large sample sizes and high dimensional confounders. The software of CausalEGM is freely available at https://github.com/SUwonglab/CausalEGM.
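A rough sketch of the conditioning step described above (illustrative only; the released CausalEGM software at the linked repository differs in architecture, losses, and training): encode high-dimensional confounders into a low-dimensional latent, model the outcome given treatment and latent, and average the two counterfactual predictions.

```python
import torch
import torch.nn as nn

class EncodeAndRegress(nn.Module):
    def __init__(self, v_dim, z_dim=5, h=64):
        super().__init__()
        # Encoder: high-dimensional confounders v -> low-dimensional latent z.
        self.encoder = nn.Sequential(nn.Linear(v_dim, h), nn.ReLU(), nn.Linear(h, z_dim))
        # Outcome model conditioned on (z, treatment).
        self.outcome = nn.Sequential(nn.Linear(z_dim + 1, h), nn.ReLU(), nn.Linear(h, 1))

    def ate(self, v):
        z = self.encoder(v)
        t1 = torch.ones(len(v), 1)
        y1 = self.outcome(torch.cat([z, t1], dim=-1))      # predicted outcome, t = 1
        y0 = self.outcome(torch.cat([z, 0 * t1], dim=-1))  # predicted outcome, t = 0
        return (y1 - y0).mean()                            # average treatment effect
```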
Estimating the effects of causal links from observational data in the presence of latent variables sometimes leads to spurious relationships that can be mistaken for causal ones. This is an important issue in many fields, such as finance and climate science. We propose the Sequential Causal Effect Variational Autoencoder (SCEVAE), a novel method for time-series causality analysis under hidden confounding. It builds on the CEVAE framework and recurrent neural networks. The confounding strength of the causal links is computed using direct causal criteria based on Pearl's do-calculus. We show the efficacy of SCEVAE by applying it to synthetic datasets with both linear and non-linear causal links. Furthermore, we apply our method to real aerosol-climate observational data. On the synthetic data, we compare our method to a time-series deconfounding method with and without substitute confounders, and demonstrate that our method performs better by comparing both against the ground truth. For the real data, we use expert knowledge of the causal links and show how using the correct proxy variables aids data reconstruction.
We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting. We introduce an adaptive transfer algorithm that learns the similarities among the data sources by utilizing Random Fourier Features to disentangle the loss function into multiple components, each of which is associated with a data source. The data sources may have different distributions; the causal effects are independently and systematically incorporated. The proposed method estimates the similarities among the sources through transfer coefficients, and hence requires no prior information about the similarity measures. The heterogeneous causal effects can be estimated without sharing the raw training data among the sources, thus minimizing the risk of privacy leakage. We also provide minimax lower bounds to assess the quality of the parameters learned from the disparate sources. The proposed method is empirically shown to outperform the baselines on decentralized data sources with dissimilar distributions.
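A small sketch of the Random Fourier Features construction the abstract relies on (the standard Rahimi-Recht construction; the dimensions and kernel width here are illustrative):

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)
    so that z(x) @ z(x') ~= k(x, x'). X: (n, d)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Each source can then share kernel approximations built from z(X) rather than
# raw training data, which is what enables the decentralized loss decomposition.
```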
This invited review discusses causal learning in the context of robotic intelligence. The paper introduces psychological findings on causal learning in human cognition, then presents traditional statistical solutions for causal discovery and causal inference. It reviews recent deep causal learning algorithms, focusing on their architectures and the benefits of using deep nets, and discusses the gap between deep causal learning and the needs of robotic intelligence.
Addressing the problem of fairness is crucial for safely using machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment, disease diagnosis, and loan granting. Several fairness notions, such as statistical parity and equalized odds, have been defined and examined over the past decade. The most recent fairness notions, however, are causality-based, reflecting the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causality-based fairness notions and studies their applicability in real-world scenarios. As most causality-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities from observational data. This paper offers a comprehensive report on the different approaches for inferring causal quantities from observational data, covering both identifiability (Pearl's SCM framework) and estimation (the potential outcome framework). The main contributions of this survey are (1) a guideline to help select a suitable fairness notion for a given real-world scenario, and (2) a ranking of the fairness notions according to Pearl's ladder of causation, indicating how difficult each notion is to deploy in practice.
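As one concrete instance of such a notion, counterfactual fairness (Kusner et al.'s standard formulation, quoted here for reference) requires the prediction $\hat{Y}$ to be invariant under a counterfactual change of the sensitive attribute $A$:

$$P\bigl(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\bigr) = P\bigl(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\bigr)$$

for all $y$, $x$, and attribute values $a, a'$; this is a counterfactual (Rung-3) notion on Pearl's ladder, and hence among the hardest to deploy.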