Instrumental variables (IVs), sources of treatment randomization that are conditionally independent of the outcome, play an important role in causal inference with unobserved confounders. However, existing IV-based counterfactual prediction methods require well-predefined IVs, while finding valid IVs in many real-world scenarios is an art rather than a science. Moreover, predefined hand-crafted IVs may be weak or wrong, violating the conditions for valid IVs. These thorny facts hinder the application of IV-based counterfactual prediction methods. In this paper, we propose a novel Automatic Instrumental Variable decomposition (AutoIV) algorithm to automatically generate representations serving the role of IVs from observed variables (IV candidates). Specifically, we let the learned IV representations satisfy the relevance condition with the treatment and the exclusion condition with the outcome via mutual information maximization and minimization constraints, respectively. We also learn confounder representations by encouraging them to be relevant to both the treatment and the outcome. The IV and confounder representations compete for information under their respective constraints in an adversarial game, which allows us to obtain valid IV representations for IV-based counterfactual prediction. Extensive experiments demonstrate that our method generates valid IV representations for accurate IV-based counterfactual prediction.
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IVs). This paper serves as the first effort to systematically and comprehensively introduce and discuss IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods: two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
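As a concrete illustration of the two-stage least squares stream, the following sketch (on simulated data, not taken from the survey's toolkit) shows how an instrument recovers a causal effect that naive regression gets wrong:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # unobserved confounder
t = 0.8 * z + u + rng.normal(size=n)        # treatment
y = 2.0 * t + 1.5 * u + rng.normal(size=n)  # outcome; true effect = 2.0

# Naive OLS of y on t is biased upward because u drives both t and y.
ols = np.cov(t, y)[0, 1] / np.var(t)

# 2SLS: first regress t on z, then regress y on the fitted values of t.
t_hat = z * (np.cov(z, t)[0, 1] / np.var(z))
tsls = np.cov(t_hat, y)[0, 1] / np.var(t_hat)
```

With this data-generating process the OLS slope drifts well above the true effect, while the 2SLS estimate stays near 2.0 because z is correlated with t but affects y only through t.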
In the presence of unmeasured confounders, we address the problem of treatment effect estimation from data fusion, that is, multiple datasets collected under different treatment assignment mechanisms. For example, marketers may assign different advertising strategies for the same product at different times/places. To handle the bias induced by unmeasured confounders and data fusion, we propose to separate the observational data into multiple groups (each group with an independent treatment assignment mechanism), and then explicitly model the group indicator as a Latent Group Instrumental Variable (LatGIV) to implement IV-based regression. In this paper, we conceptualize this idea and develop a unified framework to (1) estimate the distribution differences of the observed variables across groups; (2) model the LatGIVs from the different treatment assignment mechanisms; and (3) plug in the LatGIVs to estimate the treatment-response function. Empirical results demonstrate the advantages of LatGIV compared with state-of-the-art methods.
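The core idea, using a group indicator tied to a different treatment assignment mechanism as an instrument, can be illustrated with a simple Wald estimator. This is a toy sketch with a known group label, whereas LatGIV must infer the groups from data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
g = rng.integers(0, 2, size=n)   # group = data-collection regime
u = rng.normal(size=n)           # unmeasured confounder
# Each group assigns treatment with a different base propensity;
# u also shifts the propensity, creating confounding within groups.
p = 1.0 / (1.0 + np.exp(-(np.where(g == 1, 1.0, -1.0) + u)))
t = (rng.random(n) < p).astype(float)
y = 2.0 * t + 1.5 * u + rng.normal(size=n)  # true effect = 2.0

# Wald estimator using the group indicator as the instrument:
wald = (y[g == 1].mean() - y[g == 0].mean()) / (
    t[g == 1].mean() - t[g == 0].mean()
)
```

Because g is independent of u but shifts the treatment rate, the ratio of outcome to treatment differences across groups recovers the effect despite the within-group confounding.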
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Although understanding and characterizing causal effects have become essential in observational studies, it is challenging when the confounders are high-dimensional. In this article, we develop a general framework $\textit{CausalEGM}$ for estimating causal effects by encoding generative modeling, which can be applied in both binary and continuous treatment settings. Under the potential outcome framework with unconfoundedness, we establish a bidirectional transformation between the high-dimensional confounder space and a low-dimensional latent space where the density is known (e.g., multivariate normal distribution). Through this, CausalEGM simultaneously decouples the dependencies of confounders on both treatment and outcome and maps the confounders to the low-dimensional latent space. By conditioning on the low-dimensional latent features, CausalEGM can estimate the causal effect for each individual or the average causal effect within a population. Our theoretical analysis shows that the excess risk for CausalEGM can be bounded through empirical process theory. Under an assumption on encoder-decoder networks, the consistency of the estimate can be guaranteed. In a series of experiments, CausalEGM demonstrates superior performance over existing methods for both binary and continuous treatments. Specifically, we find CausalEGM to be substantially more powerful than competing methods in the presence of large sample sizes and high-dimensional confounders. The software of CausalEGM is freely available at https://github.com/SUwonglab/CausalEGM.
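CausalEGM's encoder is a trained generative model; as a much simpler stand-in, the sketch below (hypothetical linear data, with PCA in place of the learned encoder) illustrates why adjusting for a low-dimensional summary of high-dimensional confounders can suffice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 5000, 50
z = rng.normal(size=(n, 2))                 # low-dimensional latent confounder
A = rng.normal(size=(2, d))
v = z @ A + 0.1 * rng.normal(size=(n, d))   # high-dimensional observed covariates
t = z[:, 0] + z[:, 1] + rng.normal(size=n)  # treatment depends on the latent
y = 1.5 * t + 2.0 * z[:, 1] + rng.normal(size=n)  # true effect = 1.5

# Recover a 2-D summary of the covariates and adjust for it.
z_hat = PCA(n_components=2).fit_transform(v)
adjusted = LinearRegression().fit(np.column_stack([t, z_hat]), y).coef_[0]
naive = LinearRegression().fit(t.reshape(-1, 1), y).coef_[0]
```

Here the 50 covariates are noisy linear images of a 2-D latent, so PCA recovers the confounding directions and the adjusted coefficient lands near the true effect, while the unadjusted regression is biased upward. CausalEGM replaces this linear reduction with a nonlinear encoder mapping to a latent space of known density.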
The instrumental variable (IV) approach is a widely used way to estimate the causal effects of a treatment on an outcome of interest from observational data with latent confounders. A standard IV is expected to be related to the treatment variable and independent of all other variables in the system. However, it is challenging to search for a standard IV from data directly due to the strict conditions. The conditional IV (CIV) method has been proposed to allow a variable to be an instrument conditioning on a set of variables, allowing a wider choice of possible IVs and enabling broader practical applications of the IV approach. Nevertheless, there is no data-driven method to discover a CIV and its conditioning set directly from data. To fill this gap, in this paper, we propose to learn the representations of the information of a CIV and its conditioning set from data with latent confounders for average causal effect estimation. By taking advantage of deep generative models, we develop a novel data-driven approach for simultaneously learning the representation of a CIV from measured variables and generating the representation of its conditioning set given measured variables. Extensive experiments on synthetic and real-world datasets show that our method outperforms the existing IV methods.
The concept of causality plays an important role in human cognition. In the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, it has been increasingly used for causal inference on counterfactual data. Typically, deep causal models map the features of covariates to a representation space and then design various objective optimization functions to estimate counterfactual outcomes unbiasedly under different optimization methods. This paper focuses on a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatments; 2) we present a comprehensive overview of deep causal models from the perspectives of development timeline and method classification; and 3) we offer a detailed and comprehensive classification and analysis of the relevant datasets and source code.
Causal inference is to estimate the causal effect in a causal relationship when an intervention is applied. Precisely, in a causal model with binary interventions, i.e., control and treatment, the causal effect is simply the difference between the factual and the counterfactual. The difficulty is that the counterfactual must be estimated, so the causal effect can only be an estimate. The main challenge in estimating the counterfactual is to identify the confounders that affect both the outcome and the treatment. A typical approach is to cast causal inference as a supervised learning problem so that the counterfactual can be predicted. Recent machine learning methods, including linear regression and deep learning models, have been adapted to causal inference. In this paper, we propose a method to estimate causal effects using a variational information bottleneck (CEVIB). The promising point is that the VIB is able to naturally distill confounding variables from the data, which enables estimating causal effects from observational data. By applying CEVIB to three datasets and comparing it with other methods, we show that our approach achieves the best performance. We also experimentally demonstrate the robustness of our method.
Estimating individual treatment effects (ITEs) from observational data is relevant in many fields, such as personalized medicine. However, in practice, the treatment assignment is usually confounded by unobserved variables and thus introduces bias. A remedy to remove the bias is the use of instrumental variables (IVs). Such settings are widespread in medicine (e.g., trials where compliance is used as a binary IV). In this paper, we propose a novel, multiply robust machine learning framework, called MRIV, for estimating ITEs using binary IVs and thus yielding an unbiased ITE estimator. Different from previous work on binary IVs, our framework estimates the ITE directly via a pseudo-outcome regression. (1) We provide a theoretical analysis where we show that our framework yields multiply robust convergence rates: our ITE estimator achieves fast convergence even if several nuisance estimators converge slowly. (2) We further show that our framework asymptotically outperforms state-of-the-art plug-in IV methods for ITE estimation. (3) We build upon our theoretical results and propose a tailored deep neural network architecture, called MRIV-Net, for ITE estimation using binary IVs. Across various computational experiments, we demonstrate empirically that our MRIV-Net achieves state-of-the-art performance. To the best of our knowledge, our MRIV is the first machine learning framework for estimation in the binary IV setting shown to be multiply robust.
This invited review discusses causal learning in the context of robotic intelligence. The paper introduces the psychological findings on causal learning in human cognition, then introduces the traditional statistical solutions for causal discovery and causal inference. It reviews recent deep causal learning algorithms with a focus on their architectures and the benefits of using deep nets, and discusses the gap between deep causal learning and the needs of robotic intelligence.
As an important problem in causal inference, we discuss the estimation of treatment effects (TEs). Representing the confounders as latent variables, we propose Intact-VAE, a new variant of the variational autoencoder (VAE), motivated by the prognostic score that is sufficient for identifying TEs. Our VAE also naturally gives representations balanced for treatment groups, using its prior. Experiments on (semi-)synthetic datasets show state-of-the-art performance under diverse settings, including unobserved confounding. Based on the identifiability of our model, we prove the identification of TEs under unconfoundedness and also discuss (possible) extensions to harder settings.
Causal effect estimation is important for many tasks in the natural and social sciences. However, one cannot identify effects from observational data without making strong, often untestable assumptions. We consider algorithms for the partial identification problem, bounding treatment effects for multivariate, continuous treatments when unmeasured confounding makes identification impossible. We consider a framework where observable evidence is matched to the implications of constraints encoded in a causal model via norm-based criteria. This generalizes classical approaches based purely on generative models. Casting causal effects as objective functions within a constrained optimization problem, we combine flexible learning algorithms with Monte Carlo methods to implement a family of solutions under the name of stochastic causal programming. In particular, we present ways in which such constrained optimization problems can be parameterized without likelihood functions for the causal or observed-data models, reducing the computational and statistical complexity of the task.
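Stochastic causal programming itself is beyond a short snippet, but the flavor of partial identification can be shown with the classical no-assumption (Manski) bounds for a binary treatment and binary outcome, which bracket the average treatment effect without ruling out any confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
t = rng.integers(0, 2, size=n)   # observed binary treatment
y = rng.integers(0, 2, size=n)   # observed binary outcome

p1 = t.mean()                    # P(T = 1)
ey1_obs = y[t == 1].mean()       # E[Y | T = 1]
ey0_obs = y[t == 0].mean()       # E[Y | T = 0]

# The unobserved potential outcomes can lie anywhere in [0, 1], so:
ate_lo = p1 * ey1_obs - (1 - p1) * ey0_obs - p1        # worst case
ate_hi = p1 * ey1_obs + (1 - p1) - (1 - p1) * ey0_obs  # best case
```

Without further assumptions this interval always has width exactly 1; informative constraints, like the norm-based ones in the abstract above, are what shrink it.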
We consider learning causal relationships under conditional moment restrictions. Unlike causal inference under unconditional moment restrictions, conditional moment restrictions pose serious challenges for causal inference, especially in high-dimensional settings. To address this issue, we propose a method that transforms conditional moment restrictions into unconditional moment restrictions through importance weighting, using a conditional density ratio estimator. With this transformation, we successfully estimate the nonparametric functions defined under the conditional moment restrictions. Our proposed framework is general and can be applied to a wide range of methods, including neural networks. We analyze the estimation error and provide theoretical support for our proposed method. In experiments, we confirm the soundness of our proposed method.
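The density ratios that make such reweighting work are commonly estimated with a probabilistic classifier. The following is an illustrative sketch of that classifier-based trick on synthetic Gaussians, not the paper's conditional estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x_p = rng.normal(0.5, 1.0, size=(n, 1))  # samples from the target density p
x_q = rng.normal(0.0, 1.0, size=(n, 1))  # samples from the proposal density q

# Train a classifier to distinguish p-samples from q-samples;
# with equal sample sizes its odds estimate the ratio p(x)/q(x).
X = np.vstack([x_p, x_q])
label = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression().fit(X, label)

c = clf.predict_proba(x_q)[:, 1]
ratio = c / (1.0 - c)            # estimated density ratio p(x)/q(x)

# Importance weighting turns an expectation under q into one under p:
est = np.average(x_q.ravel(), weights=ratio)  # targets E_p[X] = 0.5
```

The same mechanism, with a ratio that conditions on covariates, is what converts a conditional moment restriction into an unconditional, reweighted one.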
Learning meaningful representations of data that can address challenges such as batch effect correction and counterfactual inference is a central problem in many domains, including computational biology. Adopting a conditional VAE framework, we show that marginal independence between the representation and a condition variable plays a key role in both of these challenges. We propose the Contrastive Mixture of Posteriors (CoMP) method, which uses a novel misalignment penalty defined in terms of mixtures of the variational posteriors to enforce this independence in latent space. We show that CoMP has attractive theoretical properties compared to previous approaches, and under additional assumptions we prove counterfactual identifiability of CoMP. We demonstrate state-of-the-art performance on a set of challenging tasks, including aligning human tumour samples with cancer cell lines, predicting transcriptome-level perturbation responses, and batch correction on single-cell RNA sequencing data. We also find parallels to fair representation learning and demonstrate that CoMP is competitive on a common task in that field.
We study the problem of observational causal inference with continuous treatments within the framework of inverse propensity score weighting. To obtain stable weights, we design a new algorithm based on entropy balancing that learns weights to directly maximize causal inference accuracy using end-to-end optimization. During the optimization, these weights are automatically tuned to the specific dataset and the causal inference algorithm being used. We provide a theoretical analysis demonstrating the consistency of our approach. Using synthetic and real-world data, we show that our algorithm estimates causal effects more accurately than baseline entropy balancing.
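The basic entropy balancing step that such methods build on can be written compactly: choose maximum-entropy weights whose weighted covariate moments hit a target, by solving the convex dual. This is a generic sketch of classical entropy balancing, not the paper's end-to-end variant:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # covariates of the sample to be reweighted
target = np.zeros(2)           # desired weighted covariate means

def dual(lam):
    # Convex dual of: min sum_i w_i*log(w_i)
    #                 s.t. sum_i w_i = 1 and X^T w = target
    return np.log(np.exp(X @ lam).sum()) - lam @ target

lam = minimize(dual, np.zeros(2)).x
w = np.exp(X @ lam)
w /= w.sum()                   # entropy-balancing weights

balanced_means = X.T @ w       # matches the target at the optimum
```

Minimizing the dual sets the weighted covariate means equal to the target exactly (up to optimizer tolerance), which is why entropy balancing avoids the instability of estimated-then-inverted propensity scores.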
Estimating personalized treatment effects from high-dimensional observational data is essential in situations where experimental designs are infeasible, unethical, or expensive. Existing approaches rely on fitting deep models on the outcomes observed for the treated and control populations. However, when measuring individual outcomes is costly, as is the case of a tumor biopsy, a sample-efficient strategy for acquiring each outcome is required. Deep Bayesian active learning provides a framework for efficient data acquisition by selecting points with high uncertainty. However, existing methods bias training-data acquisition towards regions of non-overlapping support between the treated and control populations. These are not sample-efficient because the treatment effect is not identifiable in such regions. We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects. We demonstrate the performance of the proposed acquisition strategies and their extensions on synthetic and semi-synthetic datasets, IHDP and CMNIST, designed to simulate common dataset biases and pathologies.
Causal inference, which estimates treatment effects (i.e., the causal effects of treatments on outcomes), benefits decision-making in various domains. A fundamental challenge in such studies is the treatment bias in observational data. To improve the validity of observational studies for causal inference, representation-based methods, as the state-of-the-art approaches, have shown superior performance for treatment effect estimation. Most representation-based methods assume that all observed covariates are pre-treatment (i.e., not affected by the treatment) and learn balanced representations of these observed covariates to estimate treatment effects. Unfortunately, this assumption is often too strict a requirement in practice, because some covariates are changed by the intervention of the treatment (i.e., post-treatment). In contrast, the balanced representation learned from such unchanged covariates consequently biases the treatment effect estimation.
Determining causal effects of temporal multi-intervention assists decision-making. Restricted by time-varying bias, selection bias, and interactions of multiple interventions, the disentanglement and estimation of multiple treatment effects from individual temporal data is still rare. To tackle these challenges, we propose a comprehensive framework of temporal counterfactual forecasting from an individual multiple treatment perspective (TCFimt). TCFimt constructs adversarial tasks in a seq2seq framework to alleviate selection and time-varying bias and designs a contrastive learning-based block to decouple a mixed treatment effect into separated main treatment effects and causal interactions, which further improves estimation accuracy. Through experiments on two real-world datasets from distinct fields, the proposed method shows better performance than state-of-the-art methods in predicting future outcomes under specific treatments and in choosing the optimal treatment type and timing.
Estimating long-term causal effects based on short-term surrogates is a significant but challenging problem in many real-world applications, such as marketing and medicine. Despite its success in certain domains, most existing methods estimate causal effects in an idealistic and simplistic way, ignoring the causal structure among short-term outcomes and treating all of them as surrogates. However, such methods cannot be well applied to real-world scenarios, in which the partially observed surrogates are mixed with their proxies among the short-term outcomes. To this end, we develop our flexible method, LASER, to estimate long-term causal effects in the more realistic situation that the surrogates are observed or have observed proxies. In our method, an identifiable variational autoencoder (iVAE) is used to recover all valid surrogates among all candidates, without the need to distinguish the observed surrogates from the proxies of latent surrogates. With the help of the recovered surrogates, we further design an unbiased estimate of the long-term causal effects. Extensive experimental results on real-world and semi-synthetic datasets demonstrate the effectiveness of our proposed method.
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is the independent and identical distribution, which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over the recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as \textit{domain generalization methods}. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across the different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys deal with the causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged in each method and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) Invariance via Causal Data Augmentation methods, which are applied during the data pre-processing stage; (ii) Invariance via Causal Representation Learning methods, which are utilized during the representation learning stage; and (iii) Invariance via Transferring Causal Mechanisms methods, which are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.