Duncan Green’s FP2P blog recently featured a self-described rant about the disconnect between academic debates on aid and actual aid practice. Judging by the number of comments and Twitter responses, from practitioners but mostly from academics, you could say he has hit a nerve in our little development studies community. Many of my academic colleagues and friends were disappointed with Duncan’s apparent simplification and stereotyping of development scholarship. I have a slightly different take, based on my personal experience. Why does my personal experience matter at all? Well, I did get a PhD in an American political science department (as academic as it gets), then for five years I worked at a DFID-funded research centre in a UK development studies department (meant to influence policy), and then over the last two years I have been working as an aid practitioner. And my sense is that while Duncan’s rant is justified, the apportioning of blame needs to be much more nuanced.
Not all development research is equal
To make my case I really need to start from fundamentals, which in this case is a basic categorization of development research. I think it’s fair to say that academic work on development – ideally – has three simultaneous aspirations: methodological rigour, theoretical significance, and practical relevance. The Holy Trinity of devstud. The ESRC proposal trifecta. But while that may be a common aspiration, it is also fair to say that very little research achieves all three goals, for various reasons. In this handy Venn diagram I distinguish between seven types of research.
Type 1 research privileges methodological rigour above everything else. This, I would say, can be a common pathology of mainstream political science and economics: focusing on maximizing causal inference based on the data available. This tends to be quantitative work with little by way of real-world applicability or analytical substance. (We just had an interesting Twitter discussion prompted by concerns shared by Alice Evans, Hailey Swedlund, and others on whether the job market privileges Type 1 – do check it out). Type 2 research, in turn, favors theoretical significance over rigour, exploring questions that are harder to answer but perhaps more interesting. I would say development studies – with its concern for criticism and social justice – sometimes defaults to Type 2. Now, practitioners also have sins of their own. Type 3 research sticks to practical implications at the cost of intellectual depth and encompasses a lot of policy work that comes up with recommendations based on the most cursory of literature reviews (if any!) and superficial or naive analytical frameworks. Bad PEAs, for instance, are emblematic of Type 3 research.
Now let’s move on to the hybrids.
Type A is the academic political science ideal, in my view: good theory and good methods. A lot of my friends and colleagues in polisci do precisely this kind of work, which is where real contributions can be found. It is a bit distant from actual policy significance, or at least it’s relevant only in a very general sense. It is hard to do, too – I was never very good at it myself. Type B is the public administration/policy science ideal: using solid empirical methods to address real-world problems. Of course, it tends not to achieve the kind of critical outlook or theoretical sophistication that one can find in the more academic disciplines. It can also be iffy on causality, telling us what tends to work but not why. Lastly, Type C is what I think development studies is best suited for: a combination of theory and real-world significance that may not be fit for polisci journals but that can still prompt reflection by the development community about its goals and methods. The best DFID-funded work tends to fall under this category (though a lot of other DFID-funded work gravitates towards Type 2 critical research). This is basically what I have aspired to myself.
The last hybrid is the ideal type, and so let’s call it Type 🙂.
The perils of conflation
Painting with broad strokes is often risky, however many clicks it can generate. Still, bridge-builder that I am (pontifex?), I think there’s a way in which Duncan Green’s rant and the outraged responses of some development academics may both be right.
I would like to think that what the infamous FP2P is criticizing is the kind of Type 2 research that sometimes proliferates in development studies journals and conferences: full of criticism and literature, short on generalizability or linkages to policy practice. Self-righteousness shrouded in obscurantism. If so, I think that’s a legitimate complaint – I share it myself, and have written about this precise topic before. In the interest of fairness, I think we could also lump Type 1 research into this category of “uselessness”: no less self-righteous or obscure, just praying to a different deity.
Knowing a few of the people who have reacted negatively to the post, I think their reaction may be due to the fact that they categorically do not engage in Type 2 research at all. The best development studies scholars strive for Type 🙂 and at worst they fall into Type A (rigorous and theoretical) or Type C (relevant and theoretical). Conflating Type 2 with Type A minimizes the effort to do good social science with development issues. Conflating Type 2 with Type C undervalues the attempts to tackle real-world problems with a modicum of conceptual and theoretical substance.
A self-inflicted condition?
We all know that academia, in the current environment, tends towards hyperspecialization, and that there are more rewards (grants, jobs, promotions) for getting a paper published in a paywalled journal than for re-shaping the minds of practitioners. Even if UK development studies departments ideally want both, ask around to see who gets promoted faster…
Right there we can find a reason for the perceived scarcity of academic research with practical relevance.
What Duncan Green’s post failed to notice is that the aid sector/industry/community makes it incredibly hard to do rigorous research on issues of development practice. Our incentives as practitioners are seldom to open doors and speak honestly to outsiders – when we do, it is often anonymously. Faced with this barrier of silence, a lot of aid research has to rely on more or less inconsequential datasets that are publicly available (like at the OECD-DAC) or on the kindness of individual practitioners who want to share “the real story”. That can very easily push academics interested in aid issues into Type 1 research – reliable but probably invalid – or Type 2 research – valid but unreliable.
The political, organizational, and professional incentives and culture of the aid community make it really hard to do Type C research, much less Type 🙂. From that perspective, the books by Hailey Swedlund, Dan Honig, or Matt Andrews are true achievements, the best that academics can do given the available data.
So that’s my point of disagreement with the FP2P post: it is fair to accuse some academic researchers of irrelevance, but only if we also accuse the aid industry of calculated inaccessibility.