Qualitative Methods Research
This strand of research examines avenues of productive integration between qualitative methods and both statistical methods and game theory, and explicates the assumptions needed to infer causal relationships using qualitative methods.
Abstract: Proponents of set-theoretic comparative methods (STCM) sharply differentiate their approach from quantitative analysis—unlike many researchers who focus on integrating qualitative and quantitative methods. This article engages these opposing views by demonstrating shared foundations between STCM and quantitative techniques. First, it shows how the quantitative practice of analyzing cases that exhibit variation on both the explanatory conditions and the outcome—for example, all four cells of a 2×2 table—guards against misleading conclusions about necessary/sufficient conditions. Hence, conventional statistical ideas about association are relevant for STCM. Second, STCM’s tools for analyzing causal complexity share important features with regression interaction terms. Third, scrutinizing these shared foundations suggests how stronger theoretical and empirical standards for causal inference with deterministic hypotheses can be established. Focusing on shared foundations and recognizing that STCM does not genuinely break new inferential ground facilitate new opportunities for strengthening comparative research tools, rather than unproductively overemphasizing differences from mainstream methods.
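The 2×2-table logic summarized above can be made concrete with a minimal sketch. The data and variable names here are hypothetical illustrations, not material from the article; the point is only that assessing a deterministic claim requires checking the cell that could falsify it.

```python
# Minimal sketch: checking deterministic necessity/sufficiency claims
# against all four cells of a 2x2 table. Cases are hypothetical
# (condition X present?, outcome Y present?) pairs.
from collections import Counter

cases = [(1, 1), (1, 1), (0, 1), (0, 0), (1, 0)]

table = Counter(cases)  # keys are the four cells of the 2x2 table

# "X is sufficient for Y" fails iff some case has X=1 but Y=0.
sufficient = table[(1, 0)] == 0
# "X is necessary for Y" fails iff some case has X=0 but Y=1.
necessary = table[(0, 1)] == 0

print(f"X sufficient for Y: {sufficient}")
print(f"X necessary for Y: {necessary}")
```

Ignoring either off-diagonal cell (as selecting only cases with the outcome present would do) makes the corresponding deterministic claim untestable, which is the sense in which conventional ideas about association remain relevant.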
Still Searching for the Value-Added:
Persistent Concerns About Set-Theoretic Comparative Methods
Comparative Political Studies, 2016, 49(6): 793–800.
Abstract: What novel leverage for understanding the social world do set-theoretic comparative methods (STCM) offer? I have argued that although important methodological ideas underlie this method, many STCM techniques converge with existing quantitative tools of statistical modeling. Unfortunately, STCM scholars have often obscured this crucial point by instead emphasizing stark differences from quantitative tools. Here, I further develop two key arguments about these convergences by addressing the astute commentaries offered by Thiem et al. and Schneider: (a) Regarding necessary and sufficient conditions, STCM’s procedures for incorporating cases from a 2×2 table may yield erroneous conclusions that can easily be avoided by using more conventional techniques. (b) Regarding causal complexity, STCM and statistical interaction terms often provide the same information. These arguments demonstrate that STCM scholars have yet to establish distinctive advantages of their methods over statistical modeling. Furthermore, alternative qualitative tools offer considerably more promise than does STCM.
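The claimed equivalence between set-theoretic causal complexity and statistical interaction terms can be illustrated with a toy check. This is a hedged sketch under the assumption of binary conditions: a Boolean conjunction ("A and B jointly produce Y") is reproduced exactly by a saturated linear model whose only nonzero coefficient is the interaction term.

```python
# Sketch: a Boolean conjunction expressed as a saturated regression
# with an interaction term. Data and coefficients are illustrative.
import itertools

def y_boolean(a: int, b: int) -> int:
    # Configurational claim: Y occurs iff A and B are both present.
    return int(bool(a) and bool(b))

def y_regression(a: int, b: int, b0=0, b1=0, b2=0, b3=1) -> int:
    # Saturated model: y = b0 + b1*A + b2*B + b3*(A*B).
    return b0 + b1 * a + b2 * b + b3 * a * b

# The two representations agree on every cell of the 2x2 table.
for a, b in itertools.product([0, 1], repeat=2):
    assert y_boolean(a, b) == y_regression(a, b)
print("conjunction matches the interaction-term model on all four cells")
```

The same exercise extends to disjunctions of conjunctions (equifinality) with additional coefficients, which is the sense in which the two toolkits often carry the same information.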
Journal of Theoretical Politics, 2017, 29(3): 467–491.
Abstract: Political scientists frequently use qualitative evidence to support or evaluate the empirical applicability of formal models. Despite this widespread practice, neither the qualitative methods literature nor research on empirically evaluating formal models systematically addresses the topic. This article makes three contributions to bridge this gap. First, it demonstrates that formal models and qualitative evidence are indeed frequently combined in current research. Second, it shows how process tracing can be as important a tool for empirically assessing models as statistical testing, because models and process tracing share a common focus on understanding causal mechanisms. Lastly, it provides new guidelines for using process tracing that focus on issues specific to the modeling enterprise, illustrated with examples from recent research.
Abstract: Although process tracing is a prominent qualitative method, considerable disagreement exists over how process tracing contributes to causal inference. This paper integrates the process tracing "hoop test" with the potential outcomes framework and introduces formal standards for conducting convincing hoop tests. The potential outcomes language clarifies how assumptions about the causal process—the core component of hoop tests, and conceptualized here as causal process assumptions (CPAs)—can inform unobserved potential outcomes. Specifically, when elements of the causal process are invariant, hoop tests can yield convincing causal inferences. Counterfactual thought experiments provide a useful standard for assessing whether a CPA is invariant. An example from Brady (2004) demonstrates how hoop tests can produce substantively informative inferences.
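The asymmetric inferential logic of the hoop test can be sketched in a few lines. The function name and string labels are illustrative, not taken from the paper; the assumption encoded is the causal process assumption (CPA) that the evidence must be present whenever the hypothesis is true.

```python
# Sketch of hoop-test logic under a causal process assumption (CPA):
# if hypothesis H is true, evidence E must be observed. Absence of E
# eliminates H; presence of E keeps H viable without confirming it.
def hoop_test(e_observed: bool) -> str:
    if not e_observed:
        return "H eliminated"  # necessary evidence is absent
    return "H survives (not confirmed)"  # hoop passed; H remains viable

print(hoop_test(False))
print(hoop_test(True))
```

The inferential weight rests entirely on the CPA: the test is only as convincing as the claim that E is invariantly present when H holds, which is why the paper proposes counterfactual thought experiments as a standard for assessing that invariance.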