Sample selection bias in evaluation of prediction performance of causal models

James P. Long, Min Jin Ha

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Causal models are notoriously difficult to validate because they make untestable assumptions regarding confounding. New scientific experiments offer the possibility of evaluating causal models using prediction performance. Prediction performance measures are typically robust to violations of causal assumptions. However, prediction performance does depend on the selection of training and test sets. Biased training sets can lead to optimistic assessments of model performance. In this work, we revisit the prediction performance of several recently proposed causal models tested on the Kemmeren genetic perturbation data set. We find that sample selection bias is likely a key driver of model performance. We propose using a less-biased evaluation set for assessing prediction performance and compare models on this new set. In this setting, the causal models have similar or worse performance compared to standard association-based estimators such as Lasso. Finally, we compare the performance of causal estimators in simulation studies that reproduce the Kemmeren structure of genetic knockout experiments but without any sample selection bias. These results provide an improved understanding of the performance of several causal models and offer guidance on how future studies should use the Kemmeren data.
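A minimal sketch (not the authors' code) of the kind of evaluation the abstract describes: train an association-based estimator such as Lasso on observational data, then score its predictions of a downstream gene's expression after a simulated knockout intervention, as in a Kemmeren-style experiment. The structure and parameters here (a small linear structural equation model, a zero-valued knockout, the choice of target gene) are illustrative assumptions, not the paper's actual simulation design.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
p, n = 10, 500  # number of genes and observational samples (illustrative sizes)

# Lower-triangular weights define a linear SEM: gene j depends only on genes 0..j-1.
B = np.tril(rng.normal(scale=0.5, size=(p, p)), k=-1)

def simulate(n_samples, knockout=None):
    """Draw samples from the SEM; if `knockout` is a gene index, clamp that gene to 0 (hard intervention)."""
    X = np.zeros((n_samples, p))
    for j in range(p):
        X[:, j] = X @ B[j] + rng.normal(size=n_samples)
        if knockout == j:
            X[:, j] = 0.0  # knockout: downstream genes see the intervened value
    return X

X_obs = simulate(n)                    # observational training data
X_int = simulate(200, knockout=0)      # evaluation data: gene 0 knocked out

target = p - 1                         # predict a downstream gene
features = [j for j in range(p) if j != target]

# Association-based estimator trained purely on observational data.
model = Lasso(alpha=0.1).fit(X_obs[:, features], X_obs[:, target])
pred = model.predict(X_int[:, features])

mse_lasso = mean_squared_error(X_int[:, target], pred)
mse_mean = mean_squared_error(X_int[:, target],
                              np.full(len(X_int), X_obs[:, target].mean()))
print(f"Lasso MSE under intervention: {mse_lasso:.3f}  (mean baseline: {mse_mean:.3f})")
```

In this toy setup the choice of which knockouts and which target genes enter the evaluation set plays the role that sample selection plays in the paper: restricting evaluation to favorable cases can flatter any estimator, causal or not.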

Original language: English (US)
Pages (from-to): 5-14
Number of pages: 10
Journal: Statistical Analysis and Data Mining
Volume: 15
Issue number: 1
DOIs
State: Published - Feb 2022

ASJC Scopus subject areas

  • Analysis
  • Information Systems
  • Computer Science Applications

MD Anderson CCSG core facilities

  • Biostatistics Resource Group
