The widely used 'counterfactual' definition of causal effects was designed for unbiasedness and accuracy, not for generalizability. We propose a simple definition for the External Validity (EV) of interventions and counterfactuals. The definition leads to EV statistics for individual counterfactuals, and to non-parametric effect estimators for sets of counterfactuals (i.e., for samples). We use this new definition to discuss several issues that have challenged the original counterfactual formulation: out-of-sample validity, reliance on independence assumptions or their estimation, concurrent estimation of multiple effects and full models, bias-variance tradeoffs, statistical power, omitted variables, and connections to current predictive and explanatory techniques. Methodologically, the definition also allows us to replace the parametric, and generally ill-posed, estimation problems that followed the counterfactual definition with combinatorial enumeration problems in non-experimental samples. We use this framework to generalize popular supervised, explanatory, and causal-effect estimators, improving their performance across three dimensions (External Validity, Unconfoundedness, and Accuracy) and enabling their use in non-i.i.d. samples. We demonstrate gains over the state of the art in out-of-sample prediction, intervention-effect prediction, and causal-effect estimation tasks. The COVID-19 pandemic has highlighted the need for learning solutions that provide general predictions from small samples, often with missing variables. We also demonstrate applications to this pressing problem.