Methods Matter: p-Hacking and Publication Bias in Causal Analysis in Economics
The credibility revolution in economics has promoted causal identification using randomized controlled trials (RCT), difference-in-differences (DID), instrumental variables (IV), and regression discontinuity design (RDD). Applying multiple approaches to over 21,000 hypothesis tests published in 25 leading economics journals, we find that the extent of p-hacking and publication bias varies greatly by method. IV (and to a lesser extent DID) is particularly problematic. We find no evidence that (i) papers published in the Top 5 journals differ from others; (ii) the journal "revise and resubmit" process mitigates the problem; (iii) the situation is improving over time. (JEL A14, C12, C52)
Bartik Instruments: What, When, Why, and How
Paul Goldsmith-Pinkham, Isaac Sorkin, Henry Swift · American Economic Review
Two-Way Fixed Effects Estimators with Heterogeneous Treatment Effects
Clément de Chaisemartin, Xavier D’Haultfœuille · American Economic Review
Overreaction in Macroeconomic Expectations
Pedro Bordalo, Nicola Gennaioli, Yueran Ma, Andrei Shleifer · American Economic Review
Cities in Bad Shape: Urban Geometry in India
Mariaflavia Harari · American Economic Review
Valid t-ratio Inference for IV
David S. Lee, Justin McCrary, Marcelo J. Moreira, Jack Porter · American Economic Review
Social Media, News Consumption, and Polarization: Evidence from a Field Experiment
Roee Levy · American Economic Review
Social Media and Mental Health
Luca Braghieri, Roee Levy, Alexey Makarin · American Economic Review
Methodological Variation in Empirical Corporate Finance
Todd Mitton · The Review of Financial Studies