Causal representation learning is the task of identifying the underlying causal variables and their relations from high-dimensional observations, such as images. Recent work has shown that the causal variables can be reconstructed from temporal sequences of observations, assuming there are no instantaneous causal relations between them. In practical applications, however, our measurement or frame rate may be slower than many of the causal effects. This effectively creates "instantaneous" effects and invalidates previous identifiability results. To address this issue, we propose iCITRIS, a causal representation learning method that can handle instantaneous effects in temporal sequences when given perfect interventions with known intervention targets. iCITRIS identifies the causal factors from temporal observations, while simultaneously using a differentiable causal discovery method to learn their causal graph. In experiments on three video datasets, iCITRIS accurately identifies the causal factors and their causal graph.
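As an illustrative aside, the differentiable-causal-discovery ingredient typically relies on a smooth acyclicity constraint. Below is a minimal sketch of the NOTEARS-style penalty (iCITRIS's exact formulation may differ); `W` is an assumed weighted-adjacency parameter over the latent causal factors.

```python
# Hedged sketch of a differentiable acyclicity penalty usable when learning an
# instantaneous causal graph jointly with the representation; not iCITRIS code.
import torch

def acyclicity_penalty(W: torch.Tensor) -> torch.Tensor:
    # h(W) = tr(exp(W o W)) - d is zero exactly when the weighted graph W is a DAG.
    d = W.shape[0]
    return torch.trace(torch.matrix_exp(W * W)) - d

W = torch.randn(5, 5, requires_grad=True)
acyclicity_penalty(W).backward()  # differentiable, so usable as a regularizer
```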
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments. In this paper, we propose CITRIS, a variational autoencoder framework that learns causal representations from temporal sequences of images in which the underlying causal factors may have been intervened upon. In contrast to recent literature, CITRIS exploits temporality and observed intervention targets to identify both scalar and multidimensional causal factors, such as 3D rotation angles. Furthermore, by introducing a normalizing flow, CITRIS can easily be extended to leverage and disentangle representations obtained by already pretrained autoencoders. Extending previous results on scalar causal factors, we prove identifiability in a more general setting in which only some components of a causal factor are affected by interventions. In experiments on 3D-rendered image sequences, CITRIS outperforms previous methods at recovering the underlying causal variables. Moreover, using pretrained autoencoders, CITRIS can even generalize to unseen instantiations of causal factors, opening future research avenues in sim-to-real generalization for causal representation learning.
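To make the role of intervention targets concrete, here is a hedged sketch of a CITRIS-style transition prior: each causal factor occupies a block of latent dimensions whose prior at time t conditions on the previous latent state and a binary flag indicating whether that factor was intervened on. All names and layer sizes are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class TransitionPrior(nn.Module):
    def __init__(self, num_factors: int, dims_per_factor: int, hidden: int = 64):
        super().__init__()
        self.dims = dims_per_factor
        latent_dim = num_factors * dims_per_factor
        # One small network per causal factor: inputs are z_{t-1} and the
        # factor's intervention flag; outputs are Gaussian mean and log-std.
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * dims_per_factor))
            for _ in range(num_factors)
        ])

    def log_prob(self, z_prev, z_next, intervention_targets):
        # z_prev, z_next: (batch, latent_dim); intervention_targets: (batch, num_factors)
        total = 0.0
        for i, net in enumerate(self.nets):
            inp = torch.cat([z_prev, intervention_targets[:, i:i + 1]], dim=-1)
            mean, log_std = net(inp).chunk(2, dim=-1)
            block = z_next[:, i * self.dims:(i + 1) * self.dims]
            dist = torch.distributions.Normal(mean, log_std.exp())
            total = total + dist.log_prob(block).sum(dim=-1)
        return total

prior = TransitionPrior(num_factors=4, dims_per_factor=2)
lp = prior.log_prob(torch.randn(8, 8), torch.randn(8, 8),
                    torch.randint(0, 2, (8, 4)).float())
```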
This paper proposes a disentangled generative causal representation (DEAR) learning method under appropriate supervised information. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally related. We show that previous methods with independent priors fail to disentangle causally related factors even under supervision. Motivated by this finding, we propose a new disentangled learning method called DEAR that enables causally controllable generation and causal representation learning. The key ingredient of this new formulation is to use a structural causal model (SCM) as the prior distribution of a bidirectional generative model. The prior is then trained jointly with a generator and an encoder using a suitable GAN algorithm, incorporating supervised information on the ground-truth factors and their underlying causal structure. We provide theoretical justification for the identifiability and asymptotic convergence of the method. We conduct extensive experiments on both synthetic and real datasets to demonstrate the effectiveness of DEAR in causally controllable generation, as well as the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
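The key ingredient, an SCM prior for a bidirectional generative model, can be sketched in a few lines. The snippet below assumes a linear SCM with a strictly upper-triangular weighted adjacency `A` (which guarantees acyclicity); DEAR's actual prior and training procedure are more elaborate.

```python
# Minimal numpy sketch, not the authors' implementation: independent exogenous
# noise eps is mapped to causally related latent factors z satisfying
# z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps.
import numpy as np

rng = np.random.default_rng(0)
k = 4                                    # number of latent factors
A = np.triu(rng.normal(size=(k, k)), 1)  # strictly upper-triangular => DAG

def scm_prior_sample(n):
    eps = rng.normal(size=(n, k))            # independent exogenous factors
    return eps @ np.linalg.inv(np.eye(k) - A)  # solves z = A^T z + eps row-wise

z = scm_prior_sample(1000)
print(np.cov(z, rowvar=False))  # correlations induced by the causal graph
```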
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, based on the idea that the dynamics of high-level concepts are often sparse. We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graphical model that relates them. We develop a rigorous identifiability theory, building on recent results in nonlinear independent component analysis (ICA), that formalizes this principle and shows how the latent variables can be recovered if one regularizes the latent mechanisms to be sparse and if certain graph connectivity criteria are satisfied by the data-generating process. As a special case of our framework, we show how one can leverage interventions with unknown targets on the latent factors to disentangle them, drawing a further connection between ICA and causality. We also propose a VAE-based method in which the latent mechanisms are learned and regularized via binary masks, and validate our theory by showing that it learns disentangled representations in simulations.
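A minimal sketch of the binary-mask idea, assuming a relaxed (sigmoid) mask and an L1-style penalty rather than the paper's exact regularizer: each latent factor's mechanism sees the previous latents only through a learnable mask whose density is penalized.

```python
import torch
import torch.nn as nn

class SparseMechanism(nn.Module):
    def __init__(self, k: int, hidden: int = 32):
        super().__init__()
        # mask_logits[i, j]: does z_j at t-1 feed the mechanism of z_i at t?
        self.mask_logits = nn.Parameter(torch.zeros(k, k))
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(k, hidden), nn.Tanh(), nn.Linear(hidden, 1))
             for _ in range(k)]
        )

    def forward(self, z_prev):
        G = torch.sigmoid(self.mask_logits)  # relaxed binary mask over edges
        preds = [net(z_prev * G[i]) for i, net in enumerate(self.nets)]
        return torch.cat(preds, dim=-1)

    def sparsity_penalty(self):
        return torch.sigmoid(self.mask_logits).sum()  # encourages few edges

model = SparseMechanism(k=5)
z_prev, z_next = torch.randn(8, 5), torch.randn(8, 5)
loss = ((model(z_prev) - z_next) ** 2).mean() + 1e-2 * model.sparsity_penalty()
loss.backward()
```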
Our goal is to recover time-delayed latent causal variables and identify their relations from measured temporal data. Estimating causally related latent variables from observations is particularly challenging, as the latent variables are not uniquely recoverable in the most general case. In this work, we consider both a nonparametric, nonstationary setting and a parametric setting for the latent processes, and propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures. We propose a series of theoretically grounded architectures that extend variational autoencoders (VAEs) by enforcing our conditions through proper constraints on the causal process prior. Experimental results on various datasets demonstrate that temporally causal latent processes are reliably identified from observed variables under different dependency structures, and that our approach considerably outperforms baselines that do not leverage history or nonstationarity information. This is one of the first works to successfully recover time-delayed latent processes from nonlinear mixtures without using sparsity or minimality assumptions.
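As a hedged illustration of a causal process prior with time delays, the sketch below models log p(z_t | z_{t-L:t-1}) with a small network producing Gaussian parameters; the paper's architectures and identifiability conditions are substantially richer.

```python
import torch
import torch.nn as nn

class TimeLaggedPrior(nn.Module):
    def __init__(self, latent_dim: int, lags: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim * lags, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # Gaussian mean and log-std
        )

    def log_prob(self, z_hist, z_t):
        # z_hist: (batch, lags, latent_dim); z_t: (batch, latent_dim)
        mean, log_std = self.net(z_hist.flatten(1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp()).log_prob(z_t).sum(-1)

prior = TimeLaggedPrior(latent_dim=8, lags=2)
lp = prior.log_prob(torch.randn(4, 2, 8), torch.randn(4, 8))
```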
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy making. However, research on causal discovery has evolved separately from inference methods, preventing a straightforward combination of methods from both fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend this model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine learning benchmarks, across data types and levels of missingness.
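The additive-noise-model view makes treatment-effect estimation by simulation easy to see. The toy example below uses hand-written structural equations (stand-ins for DECI's learned networks) and estimates an average treatment effect by mutilating the model and sampling.

```python
# Hedged numpy sketch of the additive-noise-model view: each variable is a
# nonlinear function of its parents plus independent noise, so interventions
# and treatment effects can be estimated by cutting edges and simulating.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, do_t=None):
    # Toy graph: X -> T -> Y and X -> Y (confounded treatment).
    x = rng.normal(size=n)
    t = np.tanh(x) + 0.5 * rng.normal(size=n) if do_t is None else np.full(n, do_t)
    y = 0.8 * t + 0.3 * x**2 + 0.1 * rng.normal(size=n)
    return y

# Average treatment effect of do(T=1) vs do(T=0), by Monte Carlo:
ate = simulate(100_000, do_t=1.0).mean() - simulate(100_000, do_t=0.0).mean()
print(f"estimated ATE = {ate:.3f}  (ground truth 0.8)")
```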
Disentanglement learning aims to find a low-dimensional representation consisting of multiple explanatory and generative factors of the observational data. The variational autoencoder (VAE) framework is commonly used to disentangle independent factors from observations. In real scenarios, however, factors with semantics are not necessarily independent. Instead, there may be an underlying causal structure that renders these factors dependent. We therefore propose a new VAE-based framework named CausalVAE, which includes a Causal Layer that transforms independent exogenous factors into causal endogenous ones corresponding to causally related concepts in the data. We further analyze the model, showing that the proposed model learned from observations recovers the true one up to a certain degree. Experiments are conducted on various datasets, including synthetic data and the real-world benchmark CelebA. The results show that the causal representations learned by CausalVAE are semantically interpretable, and that their causal relations, as a directed acyclic graph (DAG), are identified with good accuracy. Furthermore, we demonstrate that the proposed CausalVAE model is able to generate counterfactual data through "do-operations" on the causal factors.
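A minimal sketch of the Causal Layer and the "do-operation" (illustrative, not the CausalVAE code): exogenous factors are mapped to endogenous ones via a linear SCM, and an intervention cuts a factor's incoming edges, clamps it, and re-propagates downstream effects.

```python
import numpy as np

k = 3
A = np.array([[0., 1., 1.],   # row i lists the children of factor i:
              [0., 0., 1.],   # factor 0 causes 1 and 2; factor 1 causes 2
              [0., 0., 0.]])

def causal_layer(eps):
    # Solve z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps.
    return np.linalg.solve(np.eye(k) - A.T, eps)

def do(eps, idx, value):
    # Cut the incoming edges of z_idx (column idx of A), clamp it to `value`,
    # then re-propagate its effect to downstream factors.
    A_mut = A.copy(); A_mut[:, idx] = 0.0
    eps_mut = eps.copy(); eps_mut[idx] = value
    return np.linalg.solve(np.eye(k) - A_mut.T, eps_mut)

eps = np.random.default_rng(0).normal(size=k)
print(causal_layer(eps))
print(do(eps, idx=0, value=2.0))
```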
In this review, we discuss approaches for learning causal structure from data, also called causal discovery. In particular, we focus on approaches for learning directed acyclic graphs (DAGs) and various generalizations which allow for some variables to be unobserved in the available data. We devote special attention to two fundamental combinatorial aspects of causal structure learning. First, we discuss the structure of the search space over causal graphs. Second, we discuss the structure of equivalence classes over causal graphs, i.e., sets of graphs which represent what can be learned from observational data alone, and how these equivalence classes can be refined by adding interventional data.
Causal representation learning exposes the latent high-level causal variables behind low-level observations, which holds enormous potential for a range of downstream tasks of interest. Nevertheless, identifying the true latent causal representation from observed data is a major challenge. In this work, we focus on identifying latent causal variables. To this end, we analyze three intrinsic properties of the latent space: transitivity, permutation, and scaling. We show that transitivity severely hinders the identifiability of latent causal variables, whereas permutation and scaling guide their identification. To break transitivity, we assume that the underlying latent causal relations follow linear Gaussian models, in which the weights, means, and variances of the Gaussian noise are modulated by an additionally observed variable. Under these assumptions, we theoretically show that the latent causal variables can be identified up to trivial permutation and scaling. Building on this theoretical result, we propose a novel method, termed Structural Causal Variational Autoencoder, which directly learns the latent causal variables together with the mapping from latent causal variables to observed ones. Experimental results on synthetic and real data demonstrate the identifiability result and the ability of the proposed method to learn latent causal variables.
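The assumed latent SCM can be sketched directly; in the toy snippet below the causal weights, noise means, and variances are all (hypothetical) functions of an observed variable u, such as an environment index, which is what breaks transitivity.

```python
# Illustrative sketch of the assumed generative process, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
k = 3

def sample_latents(u: float, n: int):
    # Weights, means and variances all depend on u (toy modulation functions).
    A = np.triu(np.sin(u + np.arange(k * k)).reshape(k, k), 1)
    mean, std = 0.1 * u * np.ones(k), np.exp(0.05 * u) * np.ones(k)
    eps = mean + std * rng.normal(size=(n, k))
    return eps @ np.linalg.inv(np.eye(k) - A)  # z = A^T z + eps, row-wise

z_env0, z_env1 = sample_latents(0.0, 500), sample_latents(2.0, 500)
```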
An important problem across disciplines is the discovery of interventions that produce a desired outcome. When the space of possible interventions is large, an exhaustive search is infeasible and experimental design strategies are needed. In this context, encoding the causal relationships between variables, and hence the effect of interventions on the system, is critical for identifying desirable interventions efficiently. We develop an iterative causal method to identify optimal interventions, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean. We formulate an active learning strategy that uses the samples obtained so far from different interventions to update the belief about the underlying causal model, and to identify the samples that are most informative about optimal interventions and should therefore be acquired in the next batch. The approach employs a Bayesian update of the causal model and prioritizes interventions using a carefully designed, causally informed acquisition function. This acquisition function is evaluated in closed form, allowing for efficient optimization. The resulting algorithms are theoretically grounded with information-theoretic bounds and provable consistency results. We illustrate the method on both synthetic data and real-world biological data, namely gene expression data from Perturb-CITE-seq experiments, to identify optimal perturbations that induce a specific cell-state transition; the proposed causal approach is observed to achieve better sample efficiency than several baselines. In both cases, we observe that the causally informed acquisition function notably outperforms existing criteria, allowing for optimal intervention design with significantly fewer experiments.
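Schematically, the loop alternates a Bayesian update with acquisition-driven experiment selection. The sketch below is a toy stand-in (a discrete hypothesis posterior and a squared-distance acquisition score, not the paper's closed-form function) to show the control flow.

```python
import numpy as np

rng = np.random.default_rng(0)
target_mean = 1.5
# A discrete stand-in for the posterior over causal models: each row is one
# hypothesis about the post-interventional mean of each candidate intervention.
effects = rng.normal(size=(20, 5))
weights = np.ones(20) / 20          # posterior weights over hypotheses

for step in range(3):
    post_mean = weights @ effects                      # expected outcome per intervention
    scores = -(post_mean - target_mean) ** 2           # toy causally informed acquisition
    chosen = int(np.argmax(scores))
    outcome = effects[3, chosen] + 0.1 * rng.normal()  # run the experiment; hypothesis 3 is "true"
    lik = np.exp(-0.5 * ((outcome - effects[:, chosen]) / 0.1) ** 2)
    weights = weights * lik                            # Bayesian update of the model belief
    weights /= weights.sum()
```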
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
A common theme in causal inference is learning the causal relationships between observed variables, also known as causal discovery. This is usually a daunting task, given the large number of candidate causal graphs and the combinatorial nature of the search space. Perhaps for this reason, most research so far has focused on relatively small causal graphs with up to hundreds of nodes. However, recent advances in fields such as biology enable generating experimental datasets with thousands of interventions followed by rich profiling of thousands of variables, raising both the opportunity and the urgent need for large causal graph models. Here, we introduce the notion of factor directed acyclic graphs (f-DAGs) as a way to restrict the search space to non-linear low-rank causal interaction models. Combining this novel structural assumption with recent advances that bridge the gap between causal discovery and continuous optimization, we achieve causal discovery on thousands of variables. Additionally, as a model for the impact of statistical noise on this estimation procedure, we study an edge-perturbation model of the f-DAG skeleton based on random graphs and quantify the effect of such perturbations on the f-DAG rank. This theoretical analysis suggests that the set of candidate f-DAGs is much smaller than the full DAG space, and hence more statistically robust in the high-dimensional regime where the underlying skeleton is hard to assess. We propose Differentiable Causal Discovery of Factor Graphs (DCD-FG), a scalable implementation of f-DAG-constrained causal discovery for high-dimensional interventional data. DCD-FG uses a Gaussian non-linear low-rank structural equation model and shows marked improvements over state-of-the-art methods, both in simulations and on a recent large-scale single-cell RNA sequencing dataset with genetic interventions.
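The structural assumption can be sketched concretely: routing all variable-to-variable influence through m factor nodes means the adjacency is a rank-at-most-m product of two thin bipartite matrices. The snippet below illustrates the parameter savings and, for brevity, ignores the acyclicity constraints the actual method enforces.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1000, 10                                  # observed variables, factor nodes
U = (rng.random((d, m)) < 0.01).astype(int)      # which variables feed each factor
V = (rng.random((m, d)) < 0.01).astype(int)      # which variables each factor drives
adjacency = (U @ V) > 0                          # implied variable-level graph, rank <= m
print(adjacency.sum(), "variable-level edges encoded by",
      U.sum() + V.sum(), "bipartite edges")
```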
We explore how observational and interventional causal discovery methods can be combined. A state-of-the-art observational causal discovery algorithm for time series capable of handling latent confounders and contemporaneous effects, called LPCMCI, is extended to profit from causal constraints found through randomized control trials. Numerical results show that, given perfect interventional constraints, the reconstructed structural causal models (SCMs) of the extended LPCMCI allow for the optimal prediction of the target variable 84.6% of the time. The implementation of interventional and observational causal discovery is modular, allowing causal constraints from other sources. The second part of this thesis investigates the question of regret-minimizing control by simultaneously learning a causal model and planning actions through the causal model. The idea is that an agent seeking to optimize a measured variable first learns the system's mechanics through observational causal discovery. The agent then intervenes on the most promising variable with randomized values, allowing for the exploitation and generation of new interventional data. The agent then uses the interventional data to further enhance the causal model, allowing improved actions the next time. The extended LPCMCI compares favorably to the original LPCMCI algorithm: the numerical results show that detecting and using interventional constraints leads to reconstructed SCMs that allow for the optimal prediction of the target variable 60.9% of the time, in contrast to the baseline of 53.6% when using the original LPCMCI algorithm. Furthermore, the induced average regret decreases from 1.2 with the original LPCMCI algorithm to 1.0 with the extended LPCMCI algorithm with interventional discovery.
Current domain-independent classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when given a pair of images representing the initial and goal states (planning inputs), Latplan finds a plan to the goal state in the symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of six planning domains: 8-puzzle, 15-puzzle, Blocksworld, Sokoban, and two variations of LightsOut.
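The discrete bottleneck at the heart of this setup can be sketched with a binary-concrete (Gumbel-softmax) autoencoder, so each image maps to a vector of propositions a symbolic planner can consume; sizes and layers below are illustrative, not Latplan's actual networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLatentAE(nn.Module):
    def __init__(self, in_dim=784, n_props=36, tau=1.0):
        super().__init__()
        self.enc = nn.Linear(in_dim, n_props)
        self.dec = nn.Linear(n_props, in_dim)
        self.tau = tau

    def forward(self, x):
        logits = self.enc(x)
        # Binary-concrete sampling: near-discrete propositions with gradients.
        z = F.gumbel_softmax(torch.stack([logits, -logits], -1),
                             tau=self.tau, hard=True)[..., 0]
        return self.dec(z), z

model = BinaryLatentAE()
recon, propositions = model(torch.rand(2, 784))
```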
The framework of variational autoencoders allows us to efficiently learn deep latent-variable models, such that the model's marginal distribution over observed variables fits the data. Often, we're interested in going a step further, and want to approximate the true joint distribution over observed and latent variables, including the true prior and posterior distributions over latent variables. This is known to be generally impossible due to unidentifiability of the model. We address this issue by showing that for a broad family of deep latent-variable models, identification of the true joint distribution over observed and latent variables is actually possible up to very simple transformations, thus achieving a principled and powerful form of disentanglement. Our result requires a factorized prior distribution over the latent variables that is conditioned on an additionally observed variable, such as a class label or almost any other observation. We build on recent developments in nonlinear ICA, which we extend to the case with noisy or undercomplete observations, integrated in a maximum likelihood framework. The result also trivially contains identifiable flow-based generative models as a special case.
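The crucial ingredient, a prior over latents that factorizes conditionally on an auxiliary observed variable, is easy to sketch. Below, u is a class label indexing per-dimension Gaussian parameters; this is a minimal illustration, not the paper's estimator.

```python
import torch
import torch.nn as nn

class ConditionalPrior(nn.Module):
    def __init__(self, num_classes: int, latent_dim: int):
        super().__init__()
        self.mean = nn.Embedding(num_classes, latent_dim)
        self.log_std = nn.Embedding(num_classes, latent_dim)

    def log_prob(self, z, u):
        # Factorized conditional prior p(z | u) = prod_i N(z_i; mu_i(u), sigma_i(u))
        dist = torch.distributions.Normal(self.mean(u), self.log_std(u).exp())
        return dist.log_prob(z).sum(dim=-1)

prior = ConditionalPrior(num_classes=10, latent_dim=16)
lp = prior.log_prob(torch.randn(4, 16), torch.tensor([0, 3, 3, 9]))
```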
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Linear structural causal models (SCMs) -- in which each observed variable is generated by a subset of the other observed variables as well as a subset of the exogenous sources -- are pervasive in causal inference and causal discovery. However, for the task of causal discovery, existing work focuses almost exclusively on the submodel where each observed variable is associated with a distinct source with non-zero variance. This results in the restriction that no observed variable can deterministically depend on other observed variables or latent confounders. In this paper, we extend the results on structure learning by focusing on a subclass of linear SCMs which do not have this property, i.e., models in which observed variables can be causally affected by any subset of the sources and are allowed to be a deterministic function of other observed variables or latent confounders. This allows for a more realistic modeling of influence or information propagation in systems. We focus on the task of causal discovery from observational data generated by a member of this subclass. We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure. To the best of our knowledge, this is the first work that gives identifiability results for causal discovery under both latent confounding and deterministic relationships. Further, we propose an algorithm for recovering the underlying causal structure when the aforementioned conditions are satisfied. We validate our theoretical results on both synthetic and real datasets.
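A small worked example of this model class: with x = Bx + Ae and fewer exogenous sources than observed variables, some variables are deterministic functions of others, which shows up as a rank-deficient covariance.

```python
import numpy as np

B = np.array([[0.0, 0.0, 0.0],
              [0.7, 0.0, 0.0],
              [0.5, 0.3, 0.0]])   # x1 <- x0;  x2 <- x0, x1
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])        # only x0 and x1 receive exogenous sources,
                                  # so x2 is a deterministic function of x0, x1

e = np.random.default_rng(0).normal(size=(2, 10000))
x = np.linalg.solve(np.eye(3) - B, A @ e)   # x = (I - B)^{-1} A e
print(np.linalg.matrix_rank(np.cov(x)))     # rank 2: deterministic relation present
```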
A structural equation model (SEM) is an effective framework for reasoning about causal relationships represented via a directed acyclic graph (DAG). Recent advances have enabled effective maximum-likelihood point estimation of DAGs from observational data. However, a point estimate may not accurately capture the uncertainty in inferring the underlying graph in practical scenarios, where the true DAG is non-identifiable and/or the observed dataset is limited. We propose Bayesian Causal Discovery Nets (BCD Nets), a variational inference framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM. Developing a full Bayesian posterior over DAGs is challenging due to the discrete and combinatorial nature of graphs. We analyze key design choices for scalable VI over DAGs, such as 1) the parametrization of DAGs via an expressive variational family, 2) a continuous relaxation that enables low-variance stochastic optimization, and 3) suitable priors over the latent variables. We provide a series of experiments on real and synthetic data showing that BCD Nets outperform maximum-likelihood methods on standard causal discovery metrics, such as structural Hamming distance, in low-data regimes.
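One design choice can be sketched directly: parametrizing a DAG as a node ordering (a permutation) plus strictly lower-triangular edge weights, so every sample from the variational family is acyclic by construction. This is a schematic, not the BCD Nets code; the check at the end uses the standard trace-of-matrix-exponential criterion.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4
L = np.tril(rng.normal(size=(d, d)), -1)  # edge weights consistent with an ordering
P = np.eye(d)[rng.permutation(d)]         # a sampled node ordering (permutation matrix)
W = P @ L @ P.T                           # weighted adjacency: acyclic by construction
# Sanity check: tr(exp(W o W)) == d exactly when W is a DAG.
assert np.isclose(np.trace(expm(W * W)), d)
```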
Causal structure learning is a key problem in many domains. Causal structures can be learned by performing experiments on the system of interest. We address the largely unexplored problem of designing a batch of experiments in which multiple variables are intervened on simultaneously in each experiment. While potentially more informative than the commonly used single-variable interventions, selecting such interventions is algorithmically much more challenging, due to the doubly exponential combinatorial search space over sets of composite interventions. In this paper, we develop efficient algorithms for optimizing different objective functions that quantify the informativeness of a budget-constrained batch of experiments. By establishing novel submodularity properties of these objectives, we provide approximation guarantees for our algorithms. Empirically, our algorithms outperform both random interventions and algorithms that only select single-variable interventions.
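For intuition: once an objective F over intervention sets is monotone submodular, the classic greedy algorithm already gives a (1 - 1/e) approximation under a cardinality budget. The sketch below uses a toy coverage objective, not the paper's information measures.

```python
def greedy_batch(candidates, F, budget):
    """Greedily build a batch of interventions maximizing a set function F."""
    chosen = set()
    while len(chosen) < budget:
        gains = {c: F(chosen | {c}) - F(chosen) for c in candidates - chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        chosen.add(best)
    return chosen

# Toy objective: number of distinct variables touched by the chosen
# multi-variable interventions (monotone submodular coverage).
interventions = {("x1", "x2"), ("x2", "x3"), ("x4",), ("x1", "x4")}
F = lambda S: len(set().union(*S)) if S else 0
print(greedy_batch(interventions, F, budget=2))
```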
We consider the problem of recovering the causal structure underlying observations from different experimental conditions when the targets of the interventions in each experiment are unknown. We assume a linear structural causal model with additive Gaussian noise and consider interventions that perturb their targets while maintaining the causal relationships in the system. Different models may entail the same distributions, offering competing causal explanations for the given observations. We fully characterize this equivalence class and offer identifiability results, which we use to derive a greedy algorithm called GnIES to recover the equivalence class of the data-generating model without knowledge of the intervention targets. In addition, we develop a novel procedure to generate semi-synthetic data sets with known causal ground truth but distributions closely resembling those of a real data set of choice. We leverage this procedure and evaluate the performance of GnIES on synthetic, real, and semi-synthetic data sets. Despite the strong Gaussian distributional assumption, GnIES is robust to an array of model violations and competitive in recovering the causal structure in small- to large-sample settings. We provide, in the Python packages "gnies" and "sempler", implementations of GnIES and our semi-synthetic data generation procedure.