Estimating treatment effects from observational data is a central problem in causal inference. Methods for this problem exploit inductive biases and heuristics from causal inference to design multi-head neural network architectures and regularizers. In this work, we propose to use neurosymbolic program synthesis, a data-efficient and interpretable technique, to solve the treatment effect estimation problem. We theoretically show that neurosymbolic programming can solve the treatment effect estimation problem. By designing a Domain-Specific Language (DSL) for the treatment effect estimation problem based on the inductive biases used in the literature, we argue that neurosymbolic programming is a better alternative for treatment effect estimation than traditional methods. Our empirical study reveals that our method, which implicitly encodes inductive biases in a DSL, achieves better performance on benchmark datasets than the state-of-the-art methods.
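The abstract does not spell out the DSL itself, so the following is only a hypothetical illustration of what a small treatment-effect DSL could look like; every name (`Linear`, `If`, `evaluate`) and the toy programs are invented for exposition and are not taken from the paper:

```python
# Hypothetical sketch of a tiny DSL for outcome programs, not the paper's actual grammar.
# Programs map a covariate vector x to a predicted outcome; one program per treatment arm.
from dataclasses import dataclass
from typing import Union
import numpy as np

@dataclass
class Linear:                      # terminal: affine rule over covariates
    weights: np.ndarray
    bias: float

@dataclass
class If:                          # symbolic branch on a single covariate threshold
    feature: int
    threshold: float
    then_prog: "Program"
    else_prog: "Program"

Program = Union[Linear, If]

def evaluate(prog: Program, x: np.ndarray) -> float:
    """Interpret a DSL program on one covariate vector."""
    if isinstance(prog, Linear):
        return float(prog.weights @ x + prog.bias)
    if x[prog.feature] <= prog.threshold:
        return evaluate(prog.then_prog, x)
    return evaluate(prog.else_prog, x)

# Two programs (one per treatment arm) give an interpretable effect estimate at x.
mu0 = Linear(weights=np.array([0.5, -0.2]), bias=1.0)
mu1 = If(feature=0, threshold=2.0,
         then_prog=Linear(np.array([0.8, 0.1]), 1.5),
         else_prog=Linear(np.array([0.3, 0.0]), 2.0))
x = np.array([1.0, 3.0])
print(evaluate(mu1, x) - evaluate(mu0, x))   # illustrative CATE at x
```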
Causal inference enables the estimation of treatment effects (i.e., the causal effect of a treatment on an outcome), which benefits decision making in various fields. A fundamental challenge in such research is the treatment bias in observational data. To improve the validity of observational studies for causal inference, representation-based methods, as the state of the art, have shown superior performance for treatment effect estimation. Most representation-based methods assume that all observed covariates are pre-treatment (i.e., not affected by the treatment) and learn a balanced representation of these observed covariates to estimate treatment effects. Unfortunately, this assumption is often too strict a requirement in practice, because some covariates are changed by the intervention on the treatment (i.e., they are post-treatment). As a consequence, a balanced representation learned from covariates that include such post-treatment variables biases the treatment effect estimation.
Large observational data are increasingly available in disciplines such as health, economics, and the social sciences, where researchers are interested in causal questions rather than prediction. In this paper, starting from an empirical study that aims to investigate the effect of participation in school meal programs on health indicators, the problem of estimating heterogeneous treatment effects with non-parametric regression-based methods is studied. First, we introduce the setup and the issues related to conducting causal inference with observational or non-fully randomized data, and how these issues can be tackled with the help of statistical learning tools. Then, we review and develop a unifying taxonomy of the existing state-of-the-art frameworks that allow individual treatment effects to be estimated via non-parametric regression models. After presenting a brief overview of the model selection problem, we illustrate the performance of some of the methods on three different simulation studies. We conclude by demonstrating the use of some of the methods in an empirical analysis of the school meal program data.
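One simple member of such taxonomies is the two-model ("T-learner") strategy: fit separate outcome regressions per treatment arm and difference their predictions. A minimal sketch, with illustrative simulated data and scikit-learn regressors (these are not the specific estimators compared in the paper):

```python
# Minimal T-learner sketch: fit separate outcome regressions per treatment arm,
# then take their difference as the estimated individual treatment effect.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
t = rng.binomial(1, 0.5, size=n)                 # randomized for illustration
y = X[:, 0] + t * (1.0 + 0.5 * X[:, 1]) + rng.normal(scale=0.1, size=n)

mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])

cate_hat = mu1.predict(X) - mu0.predict(X)       # heterogeneous effect estimates
print(cate_hat.mean())                           # roughly the true average effect of 1.0
```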
The concept of causality plays an important role in human cognition. Over the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the development of deep learning techniques, it is increasingly used for causal inference on counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective optimization functions to estimate counterfactual data unbiasedly based on different optimization methods. This paper focuses on a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under both multiple treatments and continuous-dose treatments; 2) we synthesize a comprehensive overview of deep causal models from the perspectives of temporal development and method classification; 3) we provide a detailed and comprehensive classification and analysis of the relevant datasets and source code.
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Traditional causal inference approaches leverage observational study data to estimate the difference in observed and unobserved outcomes for a potential treatment, known as the conditional average treatment effect (CATE). However, CATE corresponds only to a comparison of first moments, and may therefore be insufficient to reflect the full picture of treatment effects. As an alternative, estimating the full potential outcome distributions can provide more insight. However, existing methods for estimating the potential outcome distributions of treatment effects often impose restrictive or simplistic assumptions about these distributions. Here, we propose Collaborating Causal Networks (CCN), a novel methodology that goes beyond the estimation of CATE alone by learning the full potential outcome distributions. Estimating outcome distributions via the CCN framework does not require restrictive assumptions about the underlying data-generating process. In addition, CCN facilitates estimation of the utility of each possible treatment and permits individual-specific variation through utility functions. CCN not only extends outcome estimation beyond the traditional risk difference, but also enables a more comprehensive decision-making process by allowing flexible comparisons to be defined. Under assumptions commonly made in the causal literature, we show that CCN learns distributions that asymptotically capture the true potential outcome distributions. Furthermore, we propose an adjustment approach that is empirically effective in alleviating the sample imbalance between treatment groups in observational data. Finally, we evaluate the performance of CCN in multiple synthetic and semi-synthetic experiments. We demonstrate that CCN learns improved distribution estimates compared to existing Bayesian and deep generative methods, as well as improved decisions with respect to a variety of utility functions.
Estimating treatment effects from observational data is crucial for many biomedical applications. In particular, interpretability is important to many biomedical researchers. In this paper, we first provide a theoretical analysis and derive an upper bound on the bias of average treatment effect (ATE) estimation under the strong ignorability assumption. Derived by leveraging the appealing properties of the weighted energy distance, our upper bound is tighter than those reported in the literature. Motivated by the theoretical analysis, we propose a new objective function for estimating the ATE that uses an energy-distance balancing score and hence does not require the correct specification of a propensity score model. We also leverage the recently developed neural additive models to improve the interpretability of the deep learning models used for potential outcome prediction. We further enhance our proposed model with weighted regularization based on the energy-distance balancing score. The superiority of our proposed model over current state-of-the-art methods is demonstrated in semi-synthetic experiments using two benchmark datasets, namely IHDP and ACIC.
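As a rough illustration of the quantity being balanced (the paper's estimator uses a *weighted* energy distance; the unweighted version and the toy data below are only illustrative):

```python
# Sketch: unweighted energy distance between treated and control covariate samples.
# 2 E||A-B|| - E||A-A'|| - E||B-B'||; it shrinks toward 0 as the two groups align.
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Energy distance between two samples whose rows are points."""
    return (2.0 * cdist(a, b).mean()
            - cdist(a, a).mean()
            - cdist(b, b).mean())

rng = np.random.default_rng(0)
x_treated = rng.normal(loc=0.5, size=(300, 4))   # illustrative mean shift between groups
x_control = rng.normal(loc=0.0, size=(500, 4))
print(energy_distance(x_treated, x_control))
```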
There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.
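A minimal sketch of an objective in this spirit, assuming PyTorch and using an RBF-kernel MMD as the IPM term (the layer sizes, the `alpha` weight, and the kernel bandwidth are arbitrary illustrative choices, not the paper's):

```python
# Sketch of a balanced-representation objective: factual regression loss plus an
# MMD penalty between treated and control representations (one possible IPM choice).
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(10, 64), nn.ELU(), nn.Linear(64, 32))   # representation
h = nn.ModuleList([nn.Linear(32, 1), nn.Linear(32, 1)])               # per-arm heads

def mmd_rbf(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased RBF-kernel MMD^2 estimate between two samples."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma**2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)

def cfr_style_loss(x, t, y, alpha=1.0):
    r = phi(x)
    y_hat = torch.where(t.bool(), h[1](r).squeeze(-1), h[0](r).squeeze(-1))
    factual = ((y_hat - y) ** 2).mean()                  # standard factual error
    ipm = mmd_rbf(r[t.bool()], r[~t.bool()])             # distributional distance term
    return factual + alpha * ipm

x = torch.randn(256, 10); t = torch.randint(0, 2, (256,)); y = torch.randn(256)
print(cfr_style_loss(x, t, y))
```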
Observational studies have recently received significant attention from the machine learning community, owing to the increasing availability of non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small or less representative sample sizes. In observational studies, de-confounding is a fundamental problem for individualised treatment effect (ITE) estimation. This paper proposes disentangled representations with adversarial training to selectively balance the confounders in the binary treatment setting for ITE estimation. The adversarial training of the treatment policy selectively encourages treatment-agnostic balanced representations for the confounders and helps to estimate the ITE in observational studies via counterfactual inference. Empirical results on synthetic and real-world datasets, with varying degrees of confounding, show that our proposed approach improves on the state-of-the-art methods by achieving lower error in ITE estimation.
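A rough sketch of the adversarial-balancing component alone (not the full disentangled architecture), using a gradient-reversal layer so that training the treatment discriminator pushes the encoder toward treatment-agnostic representations; all layer sizes and names are illustrative:

```python
# Sketch: adversarial balancing of a representation via a gradient-reversal layer.
# The discriminator learns to predict t from the representation, while the reversed
# gradients push the encoder toward features that carry no treatment information.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad                        # flip the sign on the way back

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
discriminator = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))

def adversarial_balance_loss(x, t):
    r = encoder(x)
    logits = discriminator(GradReverse.apply(r)).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(logits, t.float())

x = torch.randn(128, 10); t = torch.randint(0, 2, (128,))
print(adversarial_balance_loss(x, t))       # minimized jointly with the outcome loss
```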
Neural networks exploit both causal and correlational relationships in data to learn models that optimize a given performance criterion, such as classification accuracy. This results in learned models that may not necessarily reflect the true causal relationships between inputs and outputs. When domain priors on causal relationships are available at training time, it is important that a neural network model maintains these relationships as causal, even while learning to optimize the performance criterion. We propose a causal regularization method that can incorporate such causal domain priors into the network and that supports both direct and total causal effects. We show that this approach can generalize to the specification of various kinds of causal priors, including monotonicity of the causal effect of a given input feature or the removal of a certain influence for purposes of fairness. Our experiments on eleven benchmark datasets show the usefulness of this approach in regularizing learned neural network models to maintain the desired causal effects. On most datasets, domain-consistent models can be obtained without compromising accuracy.
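One simple way to encode a monotonicity-style prior, sketched here for a differentiable model (this is an illustration, not necessarily the regularizer the paper proposes), is to penalize negative input gradients of the output with respect to the constrained feature:

```python
# Sketch: penalize violations of a "feature j has a non-negative effect" prior by
# pushing the model's input gradient w.r.t. that feature to be non-negative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))

def monotonicity_penalty(x: torch.Tensor, feature: int) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    out = model(x).sum()
    grads = torch.autograd.grad(out, x, create_graph=True)[0]
    return torch.relu(-grads[:, feature]).mean()   # nonzero only where the slope is negative

x = torch.randn(64, 5)
penalty = monotonicity_penalty(x, feature=2)        # added to the task loss with some weight
print(penalty)
```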
What is the ideal regression, if any, for estimating average causal effects? We study this question in the setting of discrete covariates, deriving expressions for the finite-sample variances of various stratification estimators. This approach clarifies the fundamental statistical phenomena underlying many widely cited results. Our exposition combines insights from three distinct methodological traditions for studying causal effect estimation: potential outcomes, causal diagrams, and structural models with additive errors.
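For concreteness, the basic stratification (subclassification) estimator whose finite-sample variance such expressions characterize looks as follows on simulated data; the data-generating process and variable names are purely illustrative:

```python
# Sketch: stratified ATE estimator for a single discrete covariate.
# ATE_hat = sum over strata of P(stratum) * (mean Y among treated - mean Y among control).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
s = rng.integers(0, 3, size=n)                       # discrete covariate (3 strata)
t = rng.binomial(1, 0.3 + 0.2 * (s == 2))            # confounded treatment assignment
y = 2.0 * t + s + rng.normal(size=n)                 # true ATE = 2.0

df = pd.DataFrame({"s": s, "t": t, "y": y})
per_stratum = df.groupby("s").apply(
    lambda g: g.loc[g.t == 1, "y"].mean() - g.loc[g.t == 0, "y"].mean())
weights = df["s"].value_counts(normalize=True).sort_index()
print((per_stratum * weights).sum())                 # close to 2.0
```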
This invited review discusses causal learning in the context of robotic intelligence. The paper introduces the psychological findings on causal learning in human cognition, then introduces the traditional statistical solutions for causal discovery and causal inference. The paper reviews recent deep causal learning algorithms, with a focus on their architectures and the benefits of using deep nets, and discusses the gap between deep causal learning and the needs of robotic intelligence.
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss the IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods, including two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
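As a concrete reference point for the two-stage least squares stream, a minimal 2SLS sketch on simulated data with a single instrument (the variable names and data-generating process are illustrative, not from the survey's toolkit):

```python
# Sketch: two-stage least squares with one instrument z for an endogenous treatment t.
# Stage 1 regresses t on z; stage 2 regresses y on the fitted t_hat.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10000
u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument: affects t, not y directly
t = 0.8 * z + u + rng.normal(size=n)
y = 1.5 * t + 2.0 * u + rng.normal(size=n)  # true causal effect of t on y is 1.5

stage1 = LinearRegression().fit(z.reshape(-1, 1), t)
t_hat = stage1.predict(z.reshape(-1, 1))
stage2 = LinearRegression().fit(t_hat.reshape(-1, 1), y)
print(stage2.coef_[0])                      # near 1.5; naive OLS of y on t is biased upward
```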
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has developed separately from inference methods, preventing a direct combination of methods from the two fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by applications, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI when compared to relevant baselines for both causal discovery and (C)ATE estimation, across data types and levels of missingness, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks.
Estimating how a treatment affects individual units, known as heterogeneous treatment effect (HTE) estimation, is an important component of decision making and policy implementation. The accumulation of large amounts of data in many fields, such as healthcare and e-commerce, has led to increased interest in developing data-driven algorithms for estimating heterogeneous effects from observational and experimental data. However, these methods often make strong assumptions about the observed features and ignore the underlying causal model structure, which can bias HTE estimation. At the same time, accounting for the causal structure of real-world data is rarely trivial, since the causal mechanisms that generated the data are usually unknown. To address this issue, we develop a feature selection method that considers the value of each feature for the estimation and learns the relevant parts of the causal structure from the data. We provide strong empirical evidence that our method improves existing data-driven HTE estimation methods under arbitrary underlying causal structures. Our results on synthetic, semi-synthetic, and real-world datasets show that our feature selection algorithm leads to lower HTE estimation error.
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurement of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAE) which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
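A heavily simplified sketch of this kind of proxy-based VAE structure (Gaussian proxies, binary treatment, continuous outcome); the single-layer heads and the simplified ELBO terms below are illustrative and omit parts of the full model described in the abstract:

```python
# Sketch: VAE-style causal structure with proxies x, latent confounder z,
# treatment t, and outcome y. Heavily simplified relative to the full model.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_x, d_z = 10, 5
encoder = nn.Linear(d_x + 2, 2 * d_z)           # q(z | x, t, y) -> mean and log-variance
decode_x = nn.Linear(d_z, d_x)                  # p(x | z)
decode_t = nn.Linear(d_z, 1)                    # p(t | z)
decode_y = nn.Linear(d_z + 1, 1)                # p(y | z, t)

def neg_elbo(x, t, y):
    mu, logvar = encoder(torch.cat([x, t[:, None], y[:, None]], dim=1)).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # reparameterized sample
    rec_x = F.mse_loss(decode_x(z), x)
    rec_t = F.binary_cross_entropy_with_logits(decode_t(z).squeeze(-1), t)
    rec_y = F.mse_loss(decode_y(torch.cat([z, t[:, None]], dim=1)).squeeze(-1), y)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()   # KL to a standard normal
    return rec_x + rec_t + rec_y + kl

x = torch.randn(64, d_x); t = torch.randint(0, 2, (64,)).float(); y = torch.randn(64)
print(neg_elbo(x, t, y))
```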
Developing new drugs for target diseases is a time-consuming and expensive task, and drug repurposing has become a popular topic in the drug development field. As large amounts of health claims data become available, many studies have been conducted on such data. Real-world data are noisy, sparse, and contain many confounding factors. In addition, many studies have shown that drug effects are heterogeneous across populations. Many advanced machine learning models for estimating heterogeneous treatment effects (HTE) have emerged in recent years and have been applied in the econometrics and machine learning communities. These studies regard medicine and drug development as major application areas, but there has been limited translational research from HTE methodology to drug development. We aim to introduce HTE methodology to the healthcare area and provide feasibility considerations for conducting benchmark experiments on healthcare administrative claims data. In addition, we hope to use the benchmark experiments to show how to interpret and evaluate models when applying them to healthcare research. By introducing recent HTE techniques to a broad readership in the biomedical informatics community, we hope to promote the wide adoption of causal inference using machine learning. We also hope to demonstrate the feasibility of using HTE estimation to assess personalized drug effectiveness.
As an important problem in causal inference, we discuss the estimation of treatment effects (TEs). Representing confounders as latent variables, we propose Intact-VAE, a new variant of the variational autoencoder (VAE), motivated by the prognostic score, which is sufficient for identifying TEs. Our VAE also naturally gives representations balanced for the treatment groups, using its prior. Experiments on (semi-)synthetic datasets show state-of-the-art performance under diverse settings, including unobserved confounding. Based on the identifiability of our model, we prove identification of TEs under unconfoundedness, and also discuss (possible) extensions to harder settings.
This paper develops a sparsity-inducing version of Bayesian Causal Forests, a recently proposed non-parametric causal regression model that employs Bayesian Additive Regression Trees and is specifically designed to estimate heterogeneous treatment effects using observational data. The sparsity-inducing component we introduce is motivated by empirical studies where not all of the available covariates are relevant, leading to different degrees of sparsity underlying the surfaces of interest in the estimation of individual treatment effects. The extended version presented in this work, which we name Sparse Bayesian Causal Forests, is equipped with a pair of priors that allow the model to adjust the weight of each covariate through the corresponding number of splits in the tree ensemble. These priors improve the model's adaptability to sparse data-generating processes and allow for fully Bayesian feature shrinkage in the framework of treatment effect estimation, thus uncovering the moderating factors that drive heterogeneity. In addition, the method allows prior knowledge about the relevant confounding covariates, and about the relative magnitude of their impact on the outcome, to be incorporated into the model. We illustrate the performance of our method in simulation studies, in comparison with Bayesian Causal Forests and other state-of-the-art models, showing how it scales as the number of covariates increases and how it handles strongly confounded scenarios. Finally, we also provide an example of an application using real-world data.
Although understanding and characterizing causal effects have become essential in observational studies, it is challenging when the confounders are high-dimensional. In this article, we develop a general framework $\textit{CausalEGM}$ for estimating causal effects by encoding generative modeling, which can be applied in both binary and continuous treatment settings. Under the potential outcome framework with unconfoundedness, we establish a bidirectional transformation between the high-dimensional confounders space and a low-dimensional latent space where the density is known (e.g., multivariate normal distribution). Through this, CausalEGM simultaneously decouples the dependencies of confounders on both treatment and outcome and maps the confounders to the low-dimensional latent space. By conditioning on the low-dimensional latent features, CausalEGM can estimate the causal effect for each individual or the average causal effect within a population. Our theoretical analysis shows that the excess risk for CausalEGM can be bounded through empirical process theory. Under an assumption on encoder-decoder networks, the consistency of the estimate can be guaranteed. In a series of experiments, CausalEGM demonstrates superior performance over existing methods for both binary and continuous treatments. Specifically, we find CausalEGM to be substantially more powerful than competing methods in the presence of large sample sizes and high dimensional confounders. The software of CausalEGM is freely available at https://github.com/SUwonglab/CausalEGM.