Why is researching donors so hard?

I have just finished supervising a master’s thesis at the University of Manchester which compares the extent of internal reforms in DFID and the Spanish Development Co-operation Agency, AECID. The student – an AECID civil servant – wanted to know why her organization failed so miserably to reform internally in accordance with its international commitments to principles of aid effectiveness, especially when other European agencies like DFID did so much better on all indicators. What emerges is a story of politics and institutional infighting: a socialist prime minister who symbolically boosted the foreign aid budget through the roof without actually planning how it would be managed, a tenuous balance of institutional power where only a few key individuals tried to make a semi-autonomous agency work, and a cadre of jaded civil servants headed by political appointees for whom development co-operation was often a stepping stone in their political careers. This is as much as I can spoil right now; for the whole picture you will have to wait a while.

My work as supervisor for this thesis has been particularly easy, as I myself have been researching the organizational sociology and political economy of donor agencies for a while [our paper on DFID and the World Bank was just published by World Development]. It was also fascinating, as I have been watching AECID from afar over the years, wondering what the hell was going on inside of it. But what was most curious about the exercise was how it reaffirmed my belief that it is really hard to do proper research into how aid donors work. The obstacles are well known to researchers in this sub-sub-field:

  • Official planning documents are usually political signaling devices (when I told a senior diplomat years ago that I had read his agency’s country strategy he said “You must be one of the few that did!”).
  • Project and program documents rarely capture the activities that actually went into designing them, but instead create a fiction of logic and rationality.
  • Results tend to be misleading, as their definitions are usually massaged to comply with the counter-bureaucracy back home, and sometimes do not even match the initial goals.
  • Evaluation reports tend to be sanitized, obscuring major infighting or political controversy through quick references to “staff turnover” or “stakeholder consultation”.
  • Insiders have many axes to grind, and you always have to take their word with a grain of salt, as there tend to be hidden agendas for getting “the truth” out.
  • The people who are most instrumental in making policy choices are the least likely to talk to you.
  • To access much of the actual documentation and many key informants you simply need to be an insider, as my student is.

Some could argue that these are exactly the same challenges that researchers find when delving into any kind of formal organization, from corporations to social movements. But I wonder whether there are other factors at play, features of the aid system that make it more prone to obfuscation. Here are some potential causes that I have encountered:

  1. Foreign aid budgets are under almost constant attack, but there is no domestic constituency for aid other than NGOs; this creates a siege mentality with very clearly delineated boundaries between insiders and outsiders.
  2. The need to demonstrate results is as overwhelming as the external assessment of those results is negligent: few outsiders have the time or skills to analyze the veracity of reports, and so there is little incentive to make those reports anything other than moderately successful and mildly optimistic.
  3. Demands for transparency in government decisions and spending make it harder to live in a world of nuance and complexity, and so aid practitioners pretend to be engineers when in fact they are closer to activists.
  4. A lot of the research and thinking is outsourced to a relatively small number of trusted consultants, who have an incentive to publish reasonably skeptical but broadly supportive pieces in order to ensure their continued status as insiders.
  5. Reports and evaluations that are critical or controversial tend to remain in a perennial limbo inhabited by “consultative drafts” and other not-quite-finished products which conveniently fall outside of legal transparency requirements.
  6. Aid professionals tend to move from post to post at a relatively quick pace, which means there is little institutional memory within particular units or organizations, making them prone to committing and hiding the same sins over and over again.
  7. Because people rotate so quickly, the professional system works on the basis of personal reputation, and asking difficult questions or exposing shoddy policies is not the mark of “a good professional”.

These are just some points based on my own work, and which hopefully I will elaborate further in a manuscript at some point in the not too distant future…

In the meantime, I look forward to more research projects opening the black box of donors and explaining to regular people how aid really works.