WHAT’S THE POINT OF THEORY IN APPLIED SOCIAL SCIENCE?

Applied quantitative researchers use statistical methods to answer questions with direct practical applications. Generally this involves trying to isolate causal relationships so that policy makers or practitioners can be given reliable advice about how to achieve their goals. Labour economists, for example, study which training schemes increase employment. Education researchers evaluate the effect of different teacher training programmes on teacher retention. And criminologists try to identify how different police patrol patterns affect the crime rate. Applied research tends to attract pragmatic, empirically-minded people.

It is perhaps not surprising, then, that applied researchers generally have little time for theory. In my experience, they tend to see theory as impenetrable, untestable and unnecessary. In the first half of this blog I explain each of these objections; in the second half I argue that each of them is mistaken. My aim is to persuade applied quantitative researchers that they will make more progress with their research, and have more impact, if they make more use of theory in their work.

The first charge levelled against theory is that it is impenetrable. It is easy to underestimate how recently we developed the statistical techniques, hardware, software and datasets necessary for doing applied quantitative research. Before these existed, policy researchers generally filled the empirical vacuum with theory and, as Noah Smith has pointed out, where theory is the only game in town, the competition for publication spots tends to become a battle over who can generate the most sophisticated ideas. This has led to a proliferation of theories in social science that are variously too complex to guide research design, so expansive that data collection becomes prohibitively costly, or so nuanced that falsification is impossible. Many applied researchers conclude that engaging with this sort of theory simply isn’t worth the hassle.

Even where falsification is possible, however, many empirically-minded researchers see testing theory as a fool’s errand. The best-case scenario, they assume, is identifying a plausibly causal relationship that is inconsistent with a theory, which is sometimes loosely referred to as falsification. But ex ante the researcher faces a substantial risk of failing to falsify the theory, a result which is even less valuable. The risks involved in this sort of research therefore dilute the incentives for testing theory.

Finally, even if an applied researcher found a suitable theory and were in principle willing to take the risk of trying to falsify it, many would still argue that testing theoretical relationships is unnecessary. Why not just conduct evaluations of existing policies or programmes, which provide results about causal relationships of direct interest to policy makers? Ultimately, testing theories always seems one step removed from the most pressing question facing applied researchers: does it work?

Though each of these arguments contains a grain of truth, they are all wrong in important ways.

The tide is now turning on overly complicated theory in the social sciences. Behavioural economics has persuaded many more economists to study simple heuristics rather than mathematically cumbersome models of sub-rational decision making. In political economy, to take another example, Dani Rodrik and Sharun Mukand have recently developed a theory that, using only six basic concepts, can explain something as complex as why liberal democracy does or does not emerge. But perhaps the best illustration of the trend towards simpler theory comes from sociology. The American Sociological Association recently held a meeting to debate whether the discipline needs to make its theory more manageable. One of the papers presented at that meeting demonstrates how opinion is changing in the field with its blunt title: Fuck Nuance. It is a brilliant argument for keeping theory simple enough to be useful. The social psychologist Paul Van Lange has also developed a set of useful principles (Truth, Abstraction, Progress and Applicability, or TAPAS) that can help researchers identify and develop good theories.

Recent developments have also helped to make theory more testable. In a brilliant paper, the political scientist Kevin Clarke recently showed that the only way to really provide confirmation for a theory is to test it against other competing theories. Since then, two other political scientists, Kosuke Imai and Dustin Tingley, have shown how finite mixture models can be used to do just this. They have also developed a package for the statistical software R, making it straightforward for other researchers to test which theories apply best and, crucially, under which conditions. This approach also avoids the worst of the incentive problems associated with attempts at falsification.
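
To give a concrete flavour of the idea, here is a minimal sketch (written in Python rather than R, and not the Imai and Tingley implementation) of how a finite mixture model can attribute observations to competing theories. Each theory is treated as one component of the mixture, and an EM algorithm estimates the probability that each observation is best explained by each theory. The two “theories”, the simulated data and all variable names are purely hypothetical.

# Illustrative sketch only: a finite mixture of two competing "theories",
# where each theory predicts the outcome y from x in a different way.
# EM estimates, for every observation, the probability that it is best
# explained by each theory.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: half the units behave as Theory A predicts (steep slope),
# half as Theory B predicts (flat slope).
n = 400
x = rng.uniform(0, 1, n)
governed_by_a = rng.random(n) < 0.5
y = np.where(governed_by_a, 2.0 * x, 0.2 * x) + rng.normal(0, 0.1, n)

def loglik(y, mean, sigma):
    # Gaussian log-likelihood of each observation under one theory's prediction.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mean) ** 2 / (2 * sigma**2)

# Each theory expressed as a predicted conditional mean (fixed here for simplicity).
theories = {"A": lambda x: 2.0 * x, "B": lambda x: 0.2 * x}

pi = np.array([0.5, 0.5])   # share of observations governed by each theory
sigma = 0.3                 # common error standard deviation, updated by EM

for _ in range(100):
    # E-step: posterior probability that each observation follows each theory.
    ll = np.column_stack([loglik(y, f(x), sigma) for f in theories.values()])
    weighted = ll + np.log(pi)
    weighted -= weighted.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(weighted)
    post /= post.sum(axis=1, keepdims=True)

    # M-step: update the mixing proportions and the error scale.
    pi = post.mean(axis=0)
    resid_sq = np.column_stack([(y - f(x)) ** 2 for f in theories.values()])
    sigma = np.sqrt((post * resid_sq).sum() / n)

print("Estimated share of observations explained by each theory:",
      dict(zip(theories, pi.round(2))))

In a real application the conditional means would be estimated from the data rather than fixed in advance, and covariates could be allowed to shift the mixing probabilities, which is how this family of methods identifies the conditions under which each theory applies best.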

Theory is also necessary, in that it provides knowledge essential to answering the “does it work” question which statistics alone cannot supply. Nancy Cartwright discusses a now well-known case which highlights this. The Tamil Nadu Integrated Nutrition Project (TINP) involved providing healthcare and feeding advice to mothers of newborns, and was shown to be effective in reducing child malnutrition. This is useful statistical evidence. However, when a near-identical project was implemented in Bangladesh, it was shown to have no effect. Further research then found that educating the mothers of Bangladeshi children was ineffective because important aspects of food preparation there are not generally conducted by the mother. In Angus Deaton’s terms, while the intervention was replicated, the mechanism underlying its success was not. Understanding the theory behind a programme can therefore help clarify its value in a way that statistics alone cannot.

In summary, theory is becoming steadily less impenetrable, increasingly easy to test, and necessary for applied researchers to confidently infer policy advice from specific evaluations. Instead of focusing solely on evaluating existing programmes, applied researchers would likely have more impact if they collectively adopted a more cyclical approach in which they: evaluated existing programmes to identify which were most effective; tested known theories or mechanisms which may underpin effective programmes; helped design new programmes based on the most successful theories; and then conducted further evaluations of the new interventions. This approach would contribute to a virtuous cycle of policy-relevant discoveries which could allow quantitative researchers to deliver on Pawson and Tilley’s ideal of finding out “what works for whom in what circumstance… and how.”

Author: samsims1

Education researcher interested in how we can build teaching capability in schools. I do my research at UCL Institute of Education and the Education Datalab.
