Counterfactual inference is a powerful tool, capable of solving challenging problems in high-profile sectors. To perform counterfactual inference, one requires knowledge of the underlying causal mechanisms. However, causal mechanisms cannot be uniquely determined from observations and interventions alone, which raises the question of how to choose causal mechanisms that can be trusted in a given domain. This question has been addressed for causal models with binary variables, but the case of categorical variables remains unanswered. We address this challenge by introducing the notion of counterfactual ordering for causal models with categorical variables. To learn causal mechanisms satisfying these constraints, and to perform counterfactual inference with them, we introduce deep twin networks. These are deep neural networks that, once trained, can perform twin-network counterfactual inference, an alternative to the abduction, action, and prediction method. We empirically test our approach on diverse real-world and semi-synthetic data from medicine, epidemiology, and finance, reporting accurate estimation of counterfactual probabilities while demonstrating the issues that arise with counterfactual reasoning when counterfactual ordering is not enforced.
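To make the twin-network idea concrete, here is a minimal sketch on a toy binary SCM: the model is duplicated into a factual and a counterfactual branch that share the exogenous noise, and ordinary probabilistic inference over the joint graph answers the counterfactual query without an explicit abduction step. The mechanisms, priors, and variable names below are illustrative assumptions, not the models used in the paper.

```python
import itertools

# Toy binary SCM (illustrative, not from the paper):
#   U_x ~ Bernoulli(0.6), U_y ~ Bernoulli(0.3)   exogenous noise
#   X = U_x                                       treatment
#   Y = X XOR U_y                                 outcome
# The twin network shares U_y between the factual branch (X, Y) and the
# counterfactual branch (X* := do(X = x_star), Y*), so conditioning on
# factual evidence constrains the counterfactual outcome directly.

P_UX, P_UY = 0.6, 0.3

def mechanism(x, u_y):
    return x ^ u_y

def twin_query(x_obs, y_obs, x_star):
    """P(Y* = 1 | X = x_obs, Y = y_obs) under do(X* = x_star)."""
    num = den = 0.0
    for u_x, u_y in itertools.product([0, 1], repeat=2):
        w = (P_UX if u_x else 1 - P_UX) * (P_UY if u_y else 1 - P_UY)
        # The factual branch must reproduce the observed evidence.
        if u_x != x_obs or mechanism(x_obs, u_y) != y_obs:
            continue
        den += w
        # The counterfactual branch reuses the same noise under the intervention.
        num += w * mechanism(x_star, u_y)
    return num / den

print(twin_query(x_obs=1, y_obs=1, x_star=0))  # 0.0: the evidence pins U_y = 0
```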
This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions (also called "causal effects" or "policy evaluation"), (2) queries about probabilities of counterfactuals (including assessment of "regret," "attribution," or "causes of effects"), and (3) queries about direct and indirect effects (also known as "mediation"). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.
Explainable Artificial Intelligence (XAI) is a set of techniques for understanding both the technical and non-technical aspects of Artificial Intelligence (AI) systems. XAI is crucial to help meet the increasingly important demand for trustworthy AI, characterized by fundamental properties such as respect for human autonomy, prevention of harm, transparency, and accountability. Within XAI, counterfactual explanations aim to provide an end user with the set of features (and their corresponding values) that need to change in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions required to realize the proposed explanations; in particular, they fall short of considering the causal impact of such actions. In this paper, we present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations that capture by design the underlying causal relations from the data, while at the same time providing feasible recommendations to reach the proposed profile. Moreover, our methodology has the advantage that it can be set on top of existing counterfactual generator algorithms, thus minimizing the complexity of imposing additional causal constraints. We demonstrate the effectiveness of our approach with a set of different experiments using synthetic and real datasets (including a proprietary dataset from the financial domain).
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
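For reference, the counterfactual fairness criterion from this paper is usually stated as follows: a predictor $\hat{Y}$ is counterfactually fair given features $X$ and protected attribute $A$ if, for all $y$ and all attainable values $a'$ of $A$,

$$P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big),$$

that is, the distribution of the prediction may not change in the counterfactual world where the individual's demographic attribute is set to a different value.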
Medical image analysis is a vibrant research area that offers doctors and medical practitioners invaluable insight and the ability to accurately diagnose and monitor disease. Machine learning provides an additional boost to this area. However, machine learning for medical image analysis is particularly vulnerable to natural biases, such as domain shifts, that affect algorithmic performance and robustness. In this paper we analyze machine learning for medical image analysis within the framework of Technology Readiness Levels and review how causal analysis methods can fill a gap when creating robust and adaptable medical image analysis algorithms. We review methods that use causality in medical imaging AI/ML and find that causal analysis has the potential to mitigate critical problems for clinical translation, but that uptake and clinical downstream research have so far been limited.
Recent interest in dataset shift has produced many methods for finding invariant distributions for prediction in new, unseen environments. However, these methods consider different types of shifts and have been developed under disparate frameworks, making it difficult to analyze theoretically how solutions differ with respect to stability and accuracy. Taking a causal graphical view, we use a flexible graphical representation to express various types of dataset shift. We show that all invariant distributions correspond to a causal hierarchy of graphical operators that disable the edges in the graph responsible for the shifts. The hierarchy provides a common theoretical underpinning for understanding when and how stability to shifts can be achieved, and in what ways stable distributions can differ. We use it to establish conditions for minimax optimal performance across environments and to derive new algorithms that find optimal stable distributions. Using this new perspective, we empirically demonstrate the tradeoff between minimax and average performance.
To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples. Unfortunately, the causal mechanism is not uniquely identified by data that can be gathered by observing and interacting with the world, so the question remains of how to choose causal mechanisms. In recent work, Oberst & Sontag (2019) propose Gumbel-max SCMs, which adopt Gumbel-max reparameterizations as the causal mechanism because of an intuitively appealing counterfactual stability property. In this work, we instead argue for choosing causal mechanisms that are best under quantitative criteria, such as minimizing variance when estimating counterfactual treatment effects. We propose parameterized causal mechanisms that generalize Gumbel-max. We show that they can be trained to minimize counterfactual effect variance and other losses on a distribution of queries of interest, yielding lower-variance estimates of counterfactual treatment effects than fixed alternatives, while also generalizing to queries not seen at training time.
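As a concrete illustration of the Gumbel-max mechanism that this work generalizes, the sketch below performs counterfactual sampling for one categorical variable: Gumbel noise consistent with the observed outcome is drawn via the standard top-down (truncated Gumbel) construction, then replayed under intervened logits. This illustrates plain Gumbel-max SCMs with assumed toy logits, not the parameterized mechanisms proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noise_given_argmax(logits, k):
    """Sample Gumbel noise g such that argmax(logits + g) == k (top-down trick)."""
    log_z = np.logaddexp.reduce(logits)
    m = rng.gumbel(loc=log_z)                  # max of the shifted Gumbels
    g = np.empty_like(logits)
    for i in range(len(logits)):
        if i == k:
            g[i] = m - logits[i]
        else:
            v = rng.gumbel(loc=logits[i])      # sample, then truncate below m
            g[i] = -np.logaddexp(-v, -m) - logits[i]
    return g

def counterfactual_distribution(logits_factual, logits_cf, k_obs, n=10_000):
    """Monte Carlo estimate of P(Y* = j | Y = k_obs) under intervened logits."""
    counts = np.zeros(len(logits_cf))
    for _ in range(n):
        g = sample_noise_given_argmax(logits_factual, k_obs)
        counts[np.argmax(logits_cf + g)] += 1
    return counts / n

# Hypothetical logits: having observed category 2 under the factual logits,
# estimate what the outcome would have been under the counterfactual ones.
print(counterfactual_distribution(np.array([0.1, 0.5, 1.2]),
                                  np.array([1.0, 0.2, 0.3]), k_obs=2))
```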
Decision-making systems based on AI and machine learning are used in a wide range of real-world settings, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future in which autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society's most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are made by humans, and remain (or may be amplified) when decisions are made by machines with little transparency, accountability, or fairness. In this paper, we introduce a framework of Causal Fairness Analysis with the intent of filling this gap, that is, understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities present in observed data with the underlying, and often unobserved, collection of causal mechanisms that generate the disparity in the first place, a challenge we call the Fundamental Problem of Causal Fairness Analysis (FPCFA). To address the FPCFA, we study the problem of decomposing variations and empirical measures of fairness, attributing such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationships between the different criteria found in the literature. Finally, we study which causal assumptions are minimally needed for performing causal fairness analysis and propose a Fairness Cookbook that allows data scientists to assess the existence of disparate impact and disparate treatment.
Data science tasks can be viewed as making sense of data or testing hypotheses about it. Conclusions inferred from data can greatly guide us in making informed decisions. Big data has enabled us to carry out countless prediction tasks in combination with machine learning, such as identifying high-risk patients suffering from a certain disease and taking preventive measures. However, healthcare practitioners are not content with mere predictions: they are also interested in the cause-effect relations between input features and clinical outcomes. Understanding such relations helps doctors treat patients and reduce risk effectively. Causality is typically identified through randomized controlled trials; when such trials are not feasible, scientists and researchers turn to observational studies and attempt to draw inferences from them. However, observational studies can also be affected by selection and/or confounding biases, which can lead to wrong causal conclusions. In this chapter, we try to highlight some of the drawbacks that can arise in traditional machine learning and statistical approaches to analyzing observational data, particularly in the healthcare data analytics domain. We discuss causal inference and ways to discover cause-effect relations from observational studies in the healthcare domain. Moreover, we demonstrate applications of causal inference to common machine learning problems such as missing data and model transportability. Finally, we discuss the possibility of integrating reinforcement learning with causality as a way to counter confounding bias.
Causality is a fundamental component of the scientific endeavor to understand the world. Unfortunately, causality remains taboo in much of psychology and social science. Motivated by a growing number of recommendations on the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline. We present a new process that begins with the development, validation, and transparent formal specification of theories, incorporating techniques from the confluence of causal discovery and machine learning. We then present methods for reducing the complexity of the fully specified theoretical model into the fundamental submodel relevant to a given target hypothesis. From there, we establish whether the quantity of interest is estimable from the data, and if so, propose the use of semi-parametric machine learning methods for estimating causal effects. The overall goal is to introduce a new research pipeline that can (a) facilitate scientific inquiry compatible with the desire to test causal theories, (b) encourage the transparent representation of our theories as explicit mathematical objects, (c) tie our statistical models to specific attributes of the theory, thereby reducing the under-specification problems frequently arising from the theory-to-model gap, and (d) yield results and estimates that are causally meaningful and reproducible. The process is demonstrated through didactic examples with real-world data, and we conclude with a summary and discussion.
Addressing the problem of fairness is crucial for safely using machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment screening, disease diagnosis, and loan granting. Several notions of fairness have been defined and examined in the past decade, such as statistical parity and equalized odds. The most recent fairness notions, however, are causality-based and reflect the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causality-based fairness notions and studies their applicability in real-world scenarios. As most causality-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities using observational data. This paper offers a comprehensive report on the different approaches to inferring causal quantities from observational data, covering both identifiability (Pearl's SCM framework) and estimation (the potential outcomes framework). The main contributions of this survey paper are (1) a guideline to help select a suitable fairness notion in a given real-world scenario, and (2) a ranking of the fairness notions according to Pearl's ladder of causation, indicating how difficult each notion is to deploy in practice.
We introduce an approach to counterfactual inference based on merging information from multiple datasets. We consider a causal reformulation of the statistical marginal problem: given a collection of marginal structural causal models (SCMs) over distinct but overlapping sets of variables, determine the set of joint SCMs that are counterfactually consistent with the marginals. We formalize this approach for categorical SCMs using the response function formulation and show that it reduces the space of allowed marginal and joint SCMs. Our work thus highlights a new mode of falsifiability through additional variables, in contrast to the statistical one based on additional data.
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from research on inference methods, preventing a straightforward combination of methods from the two fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation in over a thousand experiments on synthetic datasets and causal machine learning benchmarks, across data types and levels of missingness.
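For context, a non-linear additive noise model of the kind DECI builds on generates each variable from its parents in a graph $G$ as

$$x_i = f_i\big(\mathrm{pa}_G(x_i)\big) + z_i,$$

with mutually independent noise terms $z_i$; as we read the abstract, the "flow-based" qualifier refers to modeling these noise distributions flexibly rather than fixing them to a parametric family.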
Machine learning frameworks equipped with causality can help clinicians identify the best course of treatment by answering counterfactual questions. We explore this path for the case of echocardiograms by studying the variation of the left ventricular ejection fraction, the most essential clinical metric gained from these examinations. We combine deep neural networks, twin causal networks, and generative adversarial methods for the first time to build D'Artagnan (Deep Artificial Twin-Architecture Generative Networks), a novel causal generative model. We demonstrate the soundness of our approach on a synthetic dataset before applying it to cardiac ultrasound videos to answer the question: "What would this echocardiogram look like if the patient had a different ejection fraction?". To do so, we generate new ultrasound videos that preserve the video style and anatomy of the original patient while modifying the ejection fraction conditioned on a given input. We achieve an SSIM score of 0.79 and an R2 score of 0.51 on the counterfactual videos. Code and models are available at: https://github.com/hreynaud/dartagnan.
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Discovering new drugs is a quest for, and a demonstration of, causality. As an emerging approach that leverages human knowledge and creativity, data, and machine intelligence, causal inference holds the promise of reducing cognitive bias and improving decision-making in drug discovery. Although it has already been applied across the value chain, the concepts and practice of causal inference remain obscure to many practitioners. This article offers a non-technical introduction to causal inference, reviews its recent applications, and discusses the opportunities and challenges of adopting the causal language in drug discovery and development.
What is the ideal regression, if any, for estimating average causal effects? We study this question in the setting of discrete covariates, deriving expressions for the finite-sample variance of various stratification estimators. This approach clarifies the fundamental statistical phenomena underlying many widely cited results. Our exposition combines insights from three distinct methodological traditions for studying causal effect estimation: potential outcomes, causal diagrams, and structural models with additive errors.
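For concreteness, the canonical stratification estimator alluded to here averages within-stratum mean differences over the empirical distribution of the discrete covariate $X$:

$$\hat{\tau} = \sum_{x} \hat{p}(x)\,\big(\bar{Y}_{1,x} - \bar{Y}_{0,x}\big),$$

where $\hat{p}(x)$ is the empirical frequency of stratum $x$ and $\bar{Y}_{t,x}$ is the mean outcome among units with treatment $t$ in that stratum; the finite-sample variance expressions in the paper concern estimators of this form.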
A significant body of research in the data sciences considers unfair discrimination against social categories such as race or gender that could occur or be amplified as a result of algorithmic decisions. Simultaneously, real-world disparities continue to exist, even before algorithmic decisions are made. In this work, we draw on insights from the social sciences brought into the realm of causal modeling and constrained optimization, and develop a novel algorithmic framework for tackling pre-existing real-world disparities. The purpose of our framework, which we call the "impact remediation framework," is to measure real-world disparities and discover the optimal intervention policies that could help improve equity or access to opportunity for those who are underserved with respect to an outcome of interest. We develop a disaggregated approach to tackling pre-existing disparities that relaxes the typical set of assumptions required for the use of social categories in structural causal models. Our approach flexibly incorporates counterfactuals and is compatible with various ontological assumptions about the nature of social categories. We demonstrate impact remediation with a hypothetical case study and compare our disaggregated approach to an existing state-of-the-art approach, comparing its structure and resulting policy recommendations. In contrast to most work on optimal policy learning, we explore disparity reduction itself as an objective, explicitly focusing the power of algorithms on reducing inequality.
In this review, we discuss approaches for learning causal structure from data, also called causal discovery. In particular, we focus on approaches for learning directed acyclic graphs (DAGs) and various generalizations which allow for some variables to be unobserved in the available data. We devote special attention to two fundamental combinatorial aspects of causal structure learning. First, we discuss the structure of the search space over causal graphs. Second, we discuss the structure of equivalence classes over causal graphs, i.e., sets of graphs which represent what can be learned from observational data alone, and how these equivalence classes can be refined by adding interventional data.
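As a concrete example of an equivalence class: the three DAGs X → Y → Z, X ← Y → Z, and X ← Y ← Z encode the same conditional independencies (same skeleton, no v-structure) and thus cannot be distinguished from observational data alone, whereas the collider X → Y ← Z forms its own class.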
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurement of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAE) which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
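The causal structure of inference with proxies referenced here, with latent confounder $Z$, proxy features $X$, treatment $t$, and outcome $y$, corresponds to the generative factorization

$$p(Z, X, t, y) = p(Z)\, p(X \mid Z)\, p(t \mid Z)\, p(y \mid t, Z),$$

which the VAE's decoder mirrors, while the encoder approximates the posterior over $Z$ from the observed proxies.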