In many fields of scientific research and real-world applications, unbiased estimation of causal effects from non-experimental data is critical for understanding the mechanisms underlying the data and for making decisions about effective responses or interventions. This challenging problem has been studied extensively from different angles. For causal effect estimation from data, assumptions such as the Markov property, faithfulness, and causal sufficiency are always made, and under these assumptions, complete knowledge such as a set of covariates or the underlying causal graph is still required. A practical challenge is that in many applications no such complete knowledge, or only partial knowledge, is available. In recent years, research has emerged that uses search strategies based on graphical causal models to discover useful knowledge from data for causal effect estimation under some mild assumptions, and it has shown promise in tackling practical challenges. In this survey, we review these methods and focus on the challenges faced by data-driven approaches. We discuss the assumptions, strengths, and limitations of data-driven methods. We hope this review will motivate more researchers to design better data-driven methods based on graphical causal modelling to solve the challenging problem of causal effect estimation.
Unobserved confounding is the main obstacle to causal effect estimation from observational data. Instrumental variables (IVs) are widely used for causal effect estimation when latent confounders are present. With the standard IV method, an unbiased estimate can be obtained when the given IV is valid, but the validity requirements of a standard IV are strict and untestable. Conditional IVs have been proposed to relax the requirements of standard IVs by conditioning on a set of observed variables (known as a conditioning set of a conditional IV). However, the criterion for finding a conditioning set of a conditional IV requires complete causal structure knowledge, or a directed acyclic graph (DAG) representing the causal relationships of both observed and unobserved variables. This makes it impossible to discover a conditioning set directly from data. In this paper, by leveraging maximal ancestral graphs (MAGs) for causal inference with latent variables, we propose a new type of IV in MAGs, ancestral IV, and develop the theory to support the data-driven discovery of a conditioning set for a given ancestral IV in a MAG. Based on the theory, we develop an algorithm for unbiased causal effect estimation with a given ancestral IV in a MAG and observational data. Extensive experiments on synthetic and real-world datasets demonstrate the performance of the algorithm in comparison with existing IV methods.
The instrumental variable (IV) approach is a widely used way to estimate the causal effects of a treatment on an outcome of interest from observational data with latent confounders. A standard IV is expected to be related to the treatment variable and independent of all other variables in the system. However, it is challenging to search for a standard IV from data directly due to the strict conditions. The conditional IV (CIV) method has been proposed to allow a variable to be an instrument conditioning on a set of variables, allowing a wider choice of possible IVs and enabling broader practical applications of the IV approach. Nevertheless, there is no data-driven method for discovering a CIV and its conditioning set directly from data. To fill this gap, in this paper, we propose to learn the representations of the information of a CIV and its conditioning set from data with latent confounders for average causal effect estimation. By taking advantage of deep generative models, we develop a novel data-driven approach for simultaneously learning the representation of a CIV from measured variables and generating the representation of its conditioning set given measured variables. Extensive experiments on synthetic and real-world datasets show that our method outperforms the existing IV methods.
Causality is a fundamental part of the scientific endeavour to understand the world. Unfortunately, causality is still taboo in much of psychology and social science. In light of a growing number of recommendations stressing the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline. We present a new process which begins with the development, validation, and transparent formal specification of theories, incorporating techniques from the confluence of causal discovery and machine learning. We then present methods for reducing the complexity of the fully specified theoretical model into the essential submodel relevant to a given target hypothesis. From here, we determine whether the quantity of interest can be estimated from the data, and if so, propose the use of semi-parametric machine learning methods for estimating the causal effect. The overall goal is to present a new research pipeline which can (a) facilitate scientific inquiry compatible with the desire to test causal theories, (b) encourage transparent representation of our theories as unambiguous mathematical objects, (c) tie our statistical models to the specified attributes of the theory, thereby reducing the under-specification problems frequently arising from the theory-to-model gap, and (d) yield results and estimates that are causally meaningful and reproducible. The process is demonstrated through didactic examples with real-world data, and we conclude with a summary and discussion.
In personalized decision making, evidence is needed to determine whether an action (treatment) is suitable for an individual. Such evidence can be obtained by modelling treatment effect heterogeneity in subgroups. Existing interpretable modelling methods take a top-down approach to search for subgroups with heterogeneous treatment effects, and they may miss the most specific and relevant context for an individual. In this paper, we design Treatment Effect Patterns (TEPs) to represent treatment effect heterogeneity in data. For an interpretable presentation of TEPs, we use the local causal structure around the outcome to explicitly show how those important variables are used in modelling. We also derive a formula for unbiased estimation of the Conditional Average Causal Effect (CATE), which uses the local structure in our problem setting. In the discovery process, we aim to minimize heterogeneity within each subgroup represented by a pattern. We propose a bottom-up search algorithm to discover the most specific patterns fitting individual circumstances the best for personalized decision making. Experiments show that the proposed method models treatment effect heterogeneity better than three other tree-based methods on synthetic and real-world datasets.
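For reference, the target quantity named here can be written in standard potential-outcome notation. The display below is the textbook definition of the conditional average causal effect for a subgroup described by a pattern $P$, not the paper's local-structure estimator, with $Y(1)$ and $Y(0)$ denoting potential outcomes:

$$
\mathrm{CATE}(P) \;=\; \mathbb{E}\left[\, Y(1) - Y(0) \mid \mathbf{X} \in P \,\right].
$$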
The front-door criterion can be used to identify and compute causal effects despite the presence of unmeasured confounders between a treatment and an outcome. However, the key assumptions, (i) the existence of a variable (or set of variables) that fully mediates the effect of the treatment on the outcome, and (ii) that this variable does not simultaneously suffer from a confounding problem similar to that of the treatment-outcome pair, are often deemed implausible. This paper explores the testability of these assumptions. We show that, under mild conditions involving an auxiliary variable, the assumptions encoded in the front-door model (and simple extensions of it) may be tested via generalized equality constraints. We propose two goodness-of-fit tests based on this observation and evaluate the efficacy of our proposal on real and synthetic data. We also provide theoretical and empirical comparisons to instrumental variable approaches for handling unmeasured confounding.
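For context, the adjustment that these assumptions license is Pearl's front-door formula: with treatment $X$, mediator $Z$, and outcome $Y$, the interventional distribution is recovered from purely observational quantities as

$$
P(y \mid do(x)) \;=\; \sum_{z} P(z \mid x) \sum_{x'} P(y \mid x', z)\, P(x').
$$

The equality constraints studied in the paper are meant to probe whether the observed data are compatible with the model that makes this identification valid.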
In this review, we discuss approaches for learning causal structure from data, also called causal discovery. In particular, we focus on approaches for learning directed acyclic graphs (DAGs) and various generalizations which allow for some variables to be unobserved in the available data. We devote special attention to two fundamental combinatorial aspects of causal structure learning. First, we discuss the structure of the search space over causal graphs. Second, we discuss the structure of equivalence classes over causal graphs, i.e., sets of graphs which represent what can be learned from observational data alone, and how these equivalence classes can be refined by adding interventional data.
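As a concrete illustration of the first combinatorial aspect, the following is a minimal sketch of the adjacency (skeleton) phase shared by constraint-based learners in the PC family, using a Fisher-z partial-correlation test of conditional independence. The function names and the `max_cond` cutoff are illustrative choices, not any particular library's API:

```python
import numpy as np
from itertools import combinations
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher-z test of X_i independent of X_j given X_cond (Gaussian case)."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    z = np.arctanh(r) * np.sqrt(data.shape[0] - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha      # True => independent

def pc_skeleton(data, alpha=0.05, max_cond=2):
    """Adjacency search: drop edge i-j once a separating set is found."""
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}
    for size in range(max_cond + 1):
        for i in range(d):
            for j in list(adj[i]):
                for cond in combinations(adj[i] - {j}, size):
                    if ci_test(data, i, j, cond, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
    return adj
```

Orienting the remaining edges (v-structures plus propagation rules) is the second phase, and the output then identifies a DAG only up to the equivalence classes discussed in the review.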
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Estimating how a treatment affects units individually, known as heterogeneous treatment effect (HTE) estimation, is an essential part of decision making and policy implementation. The accumulation of large amounts of data in many domains, such as healthcare and e-commerce, has led to increased interest in developing data-driven algorithms for estimating heterogeneous effects from observational and experimental data. However, these methods often make strong assumptions about the observed features and ignore the underlying causal model structure, which can lead to biased HTE estimation. At the same time, accounting for the causal structure of real-world data is rarely trivial, since the causal mechanisms that generated the data are typically unknown. To address this problem, we develop a feature selection method that considers each feature's value for HTE estimation and learns the relevant parts of the causal structure from data. We provide strong empirical evidence that our method improves existing data-driven HTE estimation methods under arbitrary underlying causal structures. Our results on synthetic, semi-synthetic, and real-world datasets show that our feature selection algorithm leads to lower HTE estimation error.
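The selection criterion itself is the paper's contribution and is not reproduced here, but the sketch below shows where a selected feature subset plugs into a generic HTE estimator. A simple T-learner is used purely for illustration; `GradientBoostingRegressor` and the `features` index list are assumed stand-ins:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, t, y, features):
    """Plug-in CATE estimates using only the selected feature columns."""
    Xs = X[:, features]                    # restrict to the selected features
    m1 = GradientBoostingRegressor().fit(Xs[t == 1], y[t == 1])  # treated arm
    m0 = GradientBoostingRegressor().fit(Xs[t == 0], y[t == 0])  # control arm
    return m1.predict(Xs) - m0.predict(Xs)  # per-unit effect estimates
```

Swapping different `features` sets into such an estimator is one way to see how a causally motivated selection can reduce estimation error.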
Considering the societal and ethical consequences of AI- and ML-based decision making is crucial for the safe and acceptable use of these emerging technologies. Fairness, in particular, guarantees that ML decisions do not result in discrimination against individuals or minority groups. Identifying and measuring fairness/discrimination reliably is better achieved using causality, which considers the causal relationship, beyond mere association, between the sensitive attribute (e.g., gender, race, religion, etc.) and the decision (e.g., job hiring, loan granting, etc.). However, the biggest impediment to using causality to address fairness is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption, and we review the major algorithms for discovering causal relations from observational data. This study focuses on causal discovery and its impact on fairness. In particular, we show how different causal discovery approaches may lead to different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions. These results are consolidated by empirical analysis using synthetic and standard fairness benchmark datasets. The main goal of this study is to highlight the importance of the causal discovery step in appropriately addressing fairness using causality.
What is the ideal regression (if any) for estimating average causal effects? We study this question in the setting of discrete covariates, deriving expressions for the finite-sample variance of various stratification estimators. This approach clarifies the fundamental statistical phenomena underlying many widely cited results. Our exposition combines insights from three distinct methodological traditions for studying causal effect estimation: potential outcomes, causal diagrams, and structural models with additive errors.
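A representative member of the family analysed is the fully stratified estimator: with a discrete covariate $S$, average the within-stratum mean differences weighted by stratum size,

$$
\hat{\tau} \;=\; \sum_{s} \frac{n_s}{n}\left(\bar{Y}_{s,1} - \bar{Y}_{s,0}\right),
$$

where $n_s$ counts the units with $S = s$ and $\bar{Y}_{s,t}$ is the mean outcome among units in stratum $s$ under treatment $t$. The finite-sample variance expressions derived in the paper concern estimators of this form.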
This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions (also called "causal effects" or "policy evaluation"), (2) queries about probabilities of counterfactuals (including assessment of "regret," "attribution," or "causes of effects"), and (3) queries about direct and indirect effects (also known as "mediation"). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.
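As a concrete instance of query type (1), the SCM framework yields the back-door adjustment formula: if a covariate set $Z$ satisfies the back-door criterion relative to $(X, Y)$, then

$$
P(y \mid do(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z),
$$

so the interventional query on the left reduces to ordinary conditional probabilities estimable from observational data.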
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss the IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods, including two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
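To fix ideas for the first of these streams, here is a minimal sketch of two-stage least squares with a single instrument. The variable names are hypothetical, and this is not the linked toolkit's API:

```python
import numpy as np

def two_stage_least_squares(Z, X, Y):
    """Plain 2SLS: regress X on Z, then Y on the first-stage fit of X."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])          # stage-1 design (with intercept)
    X_hat = Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]  # fitted treatment values
    X1 = np.column_stack([np.ones(len(X_hat)), X_hat])  # stage-2 design
    beta = np.linalg.lstsq(X1, Y, rcond=None)[0]
    return beta[1]                                      # coefficient on treatment

rng = np.random.default_rng(0)
n = 10_000
U = rng.normal(size=n)                      # latent confounder
Z = rng.normal(size=n)                      # instrument: affects X, not Y directly
X = 0.8 * Z + U + rng.normal(size=n)        # treatment
Y = 1.5 * X + 2.0 * U + rng.normal(size=n)  # outcome; true effect is 1.5
print(two_stage_least_squares(Z, X, Y))     # close to 1.5; naive OLS is biased upward
```

The control-function and machine-learning variants surveyed here generalize the two linear stages to flexible function classes.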
Data science tasks can be viewed as making sense of the data or testing hypotheses about it. Conclusions inferred from data can greatly guide us in making informed decisions. Big data has enabled us to carry out countless prediction tasks in conjunction with machine learning, such as identifying patients at high risk of a certain disease and taking preventive measures. However, healthcare practitioners are not content with mere predictions; they are also interested in the cause-effect relations between input features and clinical outcomes. Understanding such relations will help doctors treat patients and reduce risks effectively. Causality is typically identified through randomized controlled trials. Often such trials are not feasible, and scientists and researchers turn to observational studies and attempt to draw inferences. However, observational studies may also be affected by selection and/or confounding biases, which can lead to wrong causal conclusions. In this chapter, we will try to highlight some of the drawbacks that may arise in traditional machine learning and statistical approaches to analyzing observational data, particularly in the healthcare data analytics domain. We will discuss causal inference and ways to discover cause-effect relations from observational studies in healthcare domains. Moreover, we will demonstrate applications of causal inference in tackling some common machine learning issues, such as missing data and model transportability. Finally, we will discuss the possibility of integrating reinforcement learning with causality as a way to counter bias.
Inferring causal relationships from observational data is rarely straightforward, but the problem is especially difficult in high dimensions. For these applications, causal discovery algorithms typically require parametric restrictions or extreme sparsity constraints. We relax these assumptions and focus on an important but more specialized problem, namely recovering the causal order among a subset of variables known to descend from some (possibly large) set of confounding covariates, i.e., a confounder blanket. This is useful in many settings, for instance when studying a dynamic biomolecular subsystem with genetic data providing background information. Under a structural assumption called the confounder blanket principle, which we argue is essential for tractable causal discovery in high dimensions, our method accommodates graphs of low or high sparsity while maintaining polynomial time complexity. We present a structure learning algorithm that is sound and complete with respect to a so-called lazy oracle. We design inference procedures with finite-sample error control for linear and nonlinear systems, and demonstrate our approach on a range of simulated and real-world datasets. An accompanying R package, cbl, is available from CRAN.
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from inference methods, preventing the direct combination of methods from the two fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground truth causal graph under standard causal discovery assumptions. Motivated by applications, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks, across data types and levels of missingness.
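The model class named here, a non-linear additive noise model, assigns each variable a mechanism of the form

$$
x_i \;=\; f_i\!\left(\mathrm{pa}_i(\mathbf{x})\right) + \epsilon_i,
$$

where $\mathrm{pa}_i(\mathbf{x})$ denotes the parents of $x_i$ in the learned graph and the $\epsilon_i$ are mutually independent noise terms. This is the standard additive-noise assumption under which, together with the other standard causal discovery assumptions the paper invokes, the causal graph becomes recoverable from observational data.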
Addressing the problem of fairness is crucial for safely using machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment, disease diagnosis, loan granting, etc. Several fairness notions have been defined and examined in the past decade, such as statistical parity and equalized odds. The most recent fairness notions, however, are causal-based and reflect the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causal-based fairness notions and studies their applicability in real-world scenarios. As most causal-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities using observational data. This paper offers a comprehensive report of the different approaches to infer causal quantities from observational data, including identifiability (Pearl's SCM framework) and estimation (potential outcome framework). The main contributions of this survey paper are (1) a guideline to help select a suitable fairness notion given a specific real-world scenario, and (2) a ranking of the fairness notions according to Pearl's ladder of causation, indicating how difficult it is to deploy each notion in practice.
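One widely cited entry on such a list is counterfactual fairness (Kusner et al.): a predictor $\hat{Y}$ is counterfactually fair if, for every context $(X = x, A = a)$ and every alternative value $a'$ of the sensitive attribute $A$,

$$
P\!\left(\hat{Y}_{A \leftarrow a} = y \mid X = x, A = a\right) \;=\; P\!\left(\hat{Y}_{A \leftarrow a'} = y \mid X = x, A = a\right),
$$

a definition that sits on the counterfactual rung of Pearl's ladder and is therefore among the hardest to deploy in practice.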
Discovering new medicines is the quest to find and demonstrate cause-effect relationships. As an emerging approach that leverages human knowledge and creativity, data, and machine intelligence, causal inference holds the promise of reducing cognitive bias and improving decision making in drug discovery. While it has already been applied across the value chain, the concepts and practice of causal inference remain obscure to many practitioners. This article offers a non-technical introduction to causal inference, reviews its recent applications, and discusses opportunities and challenges of adopting the causal language in drug discovery and development.
This paper studies the problem of estimating the contributions of features to the prediction of a specific instance by a machine learning model, as well as the overall contributions of features to the model. The causal effect of a feature (variable) on the predicted outcome reflects the contribution of the feature to the prediction. A challenge is that most existing causal effects cannot be estimated from data without a known causal graph. In this paper, we define explanatory causal effects based on a hypothetical ideal experiment. The definition brings several benefits to model-agnostic explanations. First, the explanations are transparent and have causal meanings. Second, explanatory causal effect estimation can be data-driven. Third, the causal effects provide both a local explanation for a specific prediction and a global explanation showing the overall importance of a feature in a prediction model. We further propose a method using individual and combined variables based on explanatory causal effects for explanations. We demonstrate the definitions and methods in experiments on some real-world datasets.
We explore how observational and interventional causal discovery methods can be combined. A state-of-the-art observational causal discovery algorithm for time series capable of handling latent confounders and contemporaneous effects, called LPCMCI, is extended to profit from causal constraints found through randomized controlled trials. Numerical results show that, given perfect interventional constraints, the reconstructed structural causal models (SCMs) of the extended LPCMCI allow optimal prediction of the target variable 84.6% of the time. The implementation of interventional and observational causal discovery is modular, allowing causal constraints from other sources. The second part of this thesis investigates the question of regret-minimizing control by simultaneously learning a causal model and planning actions through the causal model. The idea is that an agent aiming to optimize a measured variable first learns the system's mechanics through observational causal discovery. The agent then intervenes on the most promising variable with randomized values, allowing for the exploitation and generation of new interventional data. The agent then uses the interventional data to further enhance the causal model, allowing improved actions the next time. The extended LPCMCI can be favorable compared to the original LPCMCI algorithm. The numerical results show that detecting and using interventional constraints leads to reconstructed SCMs that allow optimal prediction of the target variable 60.9% of the time, in contrast to the baseline of 53.6% when using the original LPCMCI algorithm. Furthermore, the induced average regret decreases from 1.2 when using the original LPCMCI algorithm to 1.0 when using the extended LPCMCI algorithm with interventional discovery.