ECPR


A Phantom Menace: Random Data, Model Specification and Causal Inference in Qualitative Comparative Analysis

Political Methodology
Methods
Qualitative Comparative Analysis
Lusine Mkrtchyan
University of Lucerne
Alrik Thiem
University of Lucerne

Abstract

To date, the method of Qualitative Comparative Analysis (QCA) has been employed by hundreds of researchers. At the same time, the literature has long held that QCA is prone to committing causal fallacies when confronted with random data. Specifically, beyond a certain case-to-factor ratio, QCA is believed to be no longer able to distinguish between random and real data. As a consequence, applied researchers relying on QCA for the analysis of their empirical data have worried that the explanatory models presented to them could be nothing but algorithmic artifacts. To minimize that risk, Marx and Dusa (2011) have proposed benchmark tables of boundary case-to-factor ratios. We argue in this article that fears of inferential breakdown in QCA are unfounded because every set of data generated by any proper causal structure can be duplicated by an isomorphic set of purely stochastic data. In this connection, we furthermore demonstrate that Marx and Dusa's benchmarks do not prevent but rather force QCA to commit causal fallacies. Ultimately, we maintain that random data are a phantom menace which applied researchers need not worry about when designing their analyses with QCA.