Useful resources on quant methods

“Rules that constrain the role of chance in human affairs often generate interesting experiments”

Angrist & Pischke

Introductory Texts

These are some of the texts that helped me get started with quant methods. I still return to them regularly when I want a refresher. Essential.

Wooldridge, J. M. (2015). Introductory Econometrics: A modern approach. Nelson Education.

Angrist, J. D., & Pischke, J. S. (2014). Mastering ‘metrics: The path from cause to effect. Princeton University Press.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.

Imai, K. (2017). Quantitative social science: An introduction. Princeton University Press.


Causal Inference

Rosenbaum, P. R. (2005). Observational study. In Everitt, B., & Howell, D. C. (Eds.), Encyclopedia of statistics in behavioral science (pp. 1809–1814). Link here.

Imai, K., King, G., & Stuart, E. A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society: Series A, 171(2), 481-502.

Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15(1), 3.

Imbens, G. W., & Rubin, D. B. (2015). A Classification of Assignment Mechanisms. In their book Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.

Imbens, G. W., & Rubin, D. B. (2015). A Brief History of the Potential Outcomes Approach to Causal Inference. In their book Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.

Rubin, D. B. (2008). For objective causal inference, design trumps analysis. The Annals of Applied Statistics, 2(3), 808-840.


Surveys & Tutorials: Statistics

Textbooks are long and expensive. Sometimes you just want a clear overview.

Steve Strand’s open access course on using regression methods in educational research. Link here.

McNeish, D., & Kelley, K. (2018). Fixed effects models versus mixed effects models for clustered data: Reviewing the approaches, disentangling the differences, and making recommendations. Psychological Methods.

Visualising Hierarchical Models. Link here.

Kreuter, F., & Valliant, R. (2007). A survey on survey statistics: What is done and can be done in Stata. Stata Journal, 7(1), 1.

Waldmann, E. (2018). Quantile regression: A short story on how and why. Statistical Modelling, 18(3-4), 203-218.

Landau, S. (2002). Using survival analysis in psychology. Understanding Statistics: Statistical Issues in Psychology, Education, and the Social Sciences, 1(4), 233-270.


Surveys & Tutorials: Evaluation methods

Textbooks are long and expensive. Sometimes you just want a clear overview.

Deaton, A., & Cartwright, N. (2017). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine.

St Clair, T., & Cook, T. D. (2015). Difference-in-differences methods in public finance. National Tax Journal, 68(2), 319.

Lee, D. S., & Lemieux, T. (2010). Regression discontinuity designs in economics. Journal of Economic Literature, 48(2), 281-355.

Austin, P. C. (2011). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3), 399-424.

This survey article is twinned with the following case study and tutorial:

Austin, P. C. (2011). A tutorial and case study in propensity score analysis: An application to estimating the effect of in-hospital smoking cessation counseling on mortality. Multivariate Behavioral Research, 46(1), 119-151.


Within-Study Comparisons

There are two ways to think about whether a research design will identify a causal effect. One is to use econometric theory to scrutinise the identifying assumptions; this is currently the dominant approach. The other is to conduct empirical studies testing whether particular research designs reproduce the results of randomised controlled trials. The latter approach is now beginning to provide substantive guidance on when observational studies do isolate causal effects. Having a good pretest, for example, is worth its weight in gold; using "focal, local" matches helps; stable pretest trends make the analyst's life much easier; and RDDs are in many ways as good as RCTs.
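
To make the logic concrete, here is a minimal simulation sketch (my own toy example, not taken from any of the papers below): it generates an RCT benchmark, then checks whether a naive observational comparison and a pretest-adjusted one reproduce it when selection into treatment runs through the pretest.

```python
# A toy within-study comparison: simulate an RCT benchmark, then check
# whether observational estimators reproduce it when selection into
# treatment depends on an observed pretest.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_effect = 2.0

pretest = rng.normal(50, 10, n)  # e.g. prior attainment

# RCT benchmark: treatment assigned at random.
t_rct = rng.integers(0, 2, n)
y_rct = 0.8 * pretest + true_effect * t_rct + rng.normal(0, 5, n)
rct_est = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Observational study: higher-pretest units are more likely to be treated.
p_treat = 1 / (1 + np.exp(-(pretest - 50) / 5))
t_obs = rng.binomial(1, p_treat)
y_obs = 0.8 * pretest + true_effect * t_obs + rng.normal(0, 5, n)

# Naive comparison of means is badly biased by selection.
naive_est = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# Adjusting for the pretest (OLS: y ~ 1 + t + pretest) recovers the
# benchmark, because selection here runs entirely through the pretest.
X = np.column_stack([np.ones(n), t_obs, pretest])
beta = np.linalg.lstsq(X, y_obs, rcond=None)[0]

print(f"RCT benchmark:    {rct_est:.2f}")
print(f"Naive comparison: {naive_est:.2f}")
print(f"Pretest-adjusted: {beta[1]:.2f}")
```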

Ferraro, P. J., & Miranda, J. J. (2017). Panel data designs and estimators as substitutes for randomized controlled trials in the evaluation of public programs. Journal of the Association of Environmental and Resource Economists, 4(1), 281-317.

Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724-750.

Fortson, K., Gleason, P., Kopa, E., & Verbitsky-Savitz, N. (2015). Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased. Economics of Education Review, 44, 100-113.

Hallberg, K., Wong, V. C., & Cook, T. D. (2016). Evaluating Methods for Selecting School-Level Comparisons in Quasi-Experimental Designs: Results from a Within-Study Comparison. Link here.

St. Clair, T., Hallberg, K., & Cook, T. D. (2016). The validity and precision of the comparative interrupted time-series design: Three within-study comparisons. Journal of Educational and Behavioral Statistics, 41(3), 269-299.

Hallberg, K., Wing, C., Wong, V., & Cook, T. (2014). Clinical trials and regression discontinuity designs. The Oxford Handbook of Quantitative Methods.


Common Errors and Misinterpretations

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337-350.

Wang, L. L., Watts, A. S., Anderson, R. A., & Little, T. D. (2013). Common fallacies in quantitative research methodology. In Masyn, K. E., Nathan, P., & Little, T. (Eds.). The Oxford Handbook of Quantitative Methods, Vol. 2: Statistical Analysis, 718.

Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 1(3). Link here.

Senn, S. (2013). Seven myths of randomisation in clinical trials. Statistics in Medicine, 32(9), 1439-1450.

@MartenvSmeden has an epic Twitter thread on misconceptions here.


Measurement

A distinctive feature of the social sciences is that many variables of interest are not directly observable. Unless you are willing to restrict yourself to only conducting policy evaluations, this necessitates careful engagement with the science of measurement.
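
As a toy illustration of why this matters (my own sketch, not drawn from the readings below): regressing an outcome on a single noisy indicator of a latent construct attenuates the estimated coefficient toward zero in proportion to the indicator's reliability.

```python
# Classical measurement error: a noisy indicator of a latent construct
# biases the regression slope toward zero by the factor of its reliability.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

latent = rng.normal(0, 1, n)            # the construct we actually care about
proxy = latent + rng.normal(0, 1, n)    # a single noisy indicator of it
y = 1.0 * latent + rng.normal(0, 1, n)  # true slope on the latent is 1.0

reliability = latent.var() / proxy.var()  # ~0.5 with these variances
cov = np.cov(proxy, y)
slope = cov[0, 1] / cov[0, 0]             # OLS slope of y on the proxy

print(f"Reliability of proxy: {reliability:.2f}")
print(f"Estimated slope:      {slope:.2f}")  # ~ true slope x reliability
```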

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061-1071.

Spector, P. E. (2014). Survey design and measure development. The Oxford Handbook of Quantitative Methods, Vol. 1, 170.

Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift's electric factor analysis machine. Understanding Statistics: Statistical Issues in Psychology, Education, and the Social Sciences, 2(1), 13-43.

Schmitt, T. A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. Journal of Psychoeducational Assessment, 29(4), 304-321.

Sass, D. A., & Schmitt, T. A. (2010). A comparative investigation of rotation criteria within exploratory factor analysis. Multivariate Behavioral Research, 45(1), 73-103.

Favero, N., & Bullock, J. B. (2014). How (not) to solve the problem: An evaluation of scholarly responses to common source bias. Journal of Public Administration Research and Theory, 25(1), 285-308.

Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review, 41, 71-90.

Weidman, A. C., Steckler, C. M., & Tracy, J. L. (2017). The jingle and jangle of emotion assessment: Imprecise measurement, casual scale usage, and conceptual fuzziness in emotion research. Emotion, 17(2), 267-295.

Cheema, J. R. (2014). A review of missing data handling methods in education research. Review of Educational Research, 84(4), 487-508.

White, I. R., Royston, P., & Wood, A. M. (2011). Multiple imputation using chained equations: Issues and guidance for practice. Statistics in Medicine, 30(4), 377-399.


The Role of Theory

Theory is essential for any evaluation that is not an RCT, because assessing the plausibility of the identifying assumptions requires it. A strong theory of selection also helps you model the assignment mechanism properly when using propensity score matching. And theory is important for generalising findings from RCTs, which often have weak external validity.
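
As a minimal sketch of the second point (my own toy example, assuming selection runs through two observed covariates that theory identifies): model the assignment mechanism with logistic regression, then reweight by inverse probabilities to estimate the treatment effect.

```python
# A toy propensity score analysis: theory tells us which covariates drive
# selection; we model the assignment mechanism on those covariates, then
# reweight by inverse probabilities to estimate the treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
true_effect = 1.0

x = rng.normal(0, 1, (n, 2))  # covariates that (by assumption) drive selection
p_true = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
t = rng.binomial(1, p_true)
y = x[:, 0] + x[:, 1] + true_effect * t + rng.normal(0, 1, n)

# Model the assignment mechanism using the covariates theory points to.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Inverse-probability-weighted (Hajek) estimate of the average treatment effect.
w1, w0 = t / ps, (1 - t) / (1 - ps)
ate = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

naive = y[t == 1].mean() - y[t == 0].mean()  # biased by selection
print(f"Naive estimate: {naive:.2f}")
print(f"IPW estimate:   {ate:.2f}  (true effect: {true_effect})")
```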

Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the evidence hierarchy. Topoi, 33(2), 339-360.

Illari, P. M., & Williamson, J. (2012). What is a mechanism? Thinking about mechanisms across the sciences. European Journal for Philosophy of Science, 2(1), 119-135.

Cook, T. D. (2014). Generalizing causal knowledge in the policy sciences: External validity as a task of both multiattribute representation and multiattribute extrapolation. Journal of Policy Analysis and Management, 33(2), 527-536.

Healy, K. (2017). Fuck nuance. Sociological Theory, 35(2), 118-127.

Van Lange, P. A. (2013). What we should expect from theories in social psychology: Truth, abstraction, progress, and applicability as standards (TAPAS). Personality and Social Psychology Review, 17(1), 40-55.

Cartwright, N. D. (2013). Evidence: For policy and wheresoever rigor is a must. Link here.


Stats Visualisations

Sampling Distributions. Link here.

Visualising Hierarchical Models. Link here.

Random Assignment. Link here.

Seeing Statistical Theory. Link here.

Leckie, G., Charlton, C., & Goldstein, H. (2016). Communicating uncertainty in school value-added league tables. Centre for Multilevel Modelling, University of Bristol. URL: http://www.cmm.bris.ac.uk/interactive/uncertainty/.

 
