Operations management (OM) research is increasingly drawing from a toolbox of econometric methods to test treatment effects. Offering guidance for future studies, HKUST’s Yanzhen Chen and Yatang Lin, together with colleagues, have surveyed the validity of such tools for establishing causality. Their recent review provides a much-needed summary for those wishing to tap into the potential of statistical techniques in empirical OM research.

Difference-in-differences (DID) analysis, instrumental variable (IV) analysis, fixed effects (FE) analysis—these and other staples of econometrics are becoming ever more familiar in the OM literature. What unites this “alphabet soup” of methods is their focus on teasing out cause-and-effect relationships in observational data. This focus, the authors note, “is consistent with the interest in understanding the causal effect of interventions in other disciplines.”

To describe how OM research utilizes these approaches—known as identification strategies—the authors reviewed hundreds of empirical papers published in Production and Operations Management (POM) since 2016. Each approach makes its own underlying assumptions, and researchers must choose carefully to ensure plausible conclusions. Amidst this forest of methodological options, the authors aimed to illuminate “the fundamental problem of causal inference and various types of causal effects that one can estimate.”

OM researchers perform empirical studies to test the effect of an intervention—for example, the adoption of a technology by some firms but not others. Scholars can now turn to a table prepared by Chen, Lin, and colleagues summarizing the applicability and limitations of these identification strategies. “No one approach is unconditionally superior to another in all circumstances,” they point out, emphasizing the importance of triangulating methods across studies. No single study, however well designed, can prove causality.

Crucially, causal estimation rests on the “potential outcomes,” or counterfactual, framework, which attributes differences between treatment and control groups solely to the treatment. Within this framework, the authors identify “two critical challenges in assessing causality” and analyze how popular identification strategies address them. The first is omitted variable bias, which arises when the treatment group, had it gone untreated, would not have followed the same path as the control group. The second is differential treatment effects: the treatment’s impact may differ between groups, so estimates for the treated do not necessarily generalize to other units.
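In the standard potential-outcomes notation (a textbook formulation, not notation quoted from the paper), each unit $i$ has an outcome $Y_i(1)$ with treatment and $Y_i(0)$ without, and only one of the two is ever observed. The naive comparison of group means then decomposes as

$$
\underbrace{E[Y \mid D=1] - E[Y \mid D=0]}_{\text{observed difference}}
= \underbrace{E[Y(1) - Y(0) \mid D=1]}_{\text{effect on the treated}}
+ \underbrace{E[Y(0) \mid D=1] - E[Y(0) \mid D=0]}_{\text{selection / omitted-variable bias}},
$$

and the effect on the treated coincides with the average effect for the wider population only if treatment effects do not differ systematically across groups, which is exactly the generalization concern described above.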

As an example of limitations, classical DID designs rely crucially on the parallel-trend assumption: absent treatment, the treated and control groups would have followed parallel outcome trends. Meanwhile, IV methods are hampered by the difficulty of finding relevant instruments, large standard errors in small samples, and the impossibility of testing whether the instruments affect the outcome solely through the treatment (the exclusion restriction).
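To make the parallel-trends logic concrete, here is a minimal sketch of a two-way fixed-effects DID regression on simulated panel data. The variable names (firm_id, year, treated, post, outcome) and the use of statsmodels are illustrative assumptions, not details drawn from the surveyed POM papers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 200, 8

# Balanced panel: half the firms adopt a technology from year 4 onward.
df = pd.DataFrame(
    [(f, t) for f in range(n_firms) for t in range(n_years)],
    columns=["firm_id", "year"],
)
df["treated"] = (df["firm_id"] < n_firms // 2).astype(int)  # treatment group
df["post"] = (df["year"] >= 4).astype(int)                  # post-adoption period

# Simulate outcomes with firm effects, a common time trend (so parallel
# trends hold by construction), and a true treatment effect of 2.0.
firm_effects = rng.normal(size=n_firms)[df["firm_id"].to_numpy()]
df["outcome"] = (
    firm_effects
    + 0.3 * df["year"]
    + 2.0 * df["treated"] * df["post"]
    + rng.normal(size=len(df))
)

# Two-way fixed-effects DID: the interaction coefficient is the DID estimate;
# firm and year dummies absorb group-level differences and the common trend.
model = smf.ols("outcome ~ treated:post + C(firm_id) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.params["treated:post"])  # close to the true effect of 2.0
```

Clustering standard errors at the firm level, as in this sketch, is a common though not universal choice in panel DID designs, since outcomes within a firm are typically serially correlated.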

The authors end with wise remarks on the implications of their survey’s findings. Assessing causality is not the be-all and end-all of research; practitioners should note the importance of other possible contributions, such as theorizing and mechanistic studies. “We remain optimistic that OM researchers will embrace other notions of causality and ways of gaining understanding via careful descriptive research,” the researchers conclude.