PEA Confessions, part I: Mainstreaming woes

Four years ago I published a research paper and policy briefing at ESID that focused on the barriers to political-economy analysis (PEA) in donor agencies. I thought our research gave me a pretty good grasp of the promises and pitfalls of PEA in the aid community. After two-and-a-half years of working as a PEA consultant, the time has come for some self-imposed accountability. This is part I of a new series of posts dramatically called “PEA Confessions”.

I want to begin with ESID Briefing Paper 5: “Mainstreaming political economy analysis (PEA) in donor agencies”. It is not my most inspired writing, but at the time it felt like a very clever contribution. Having found – with David Hulme – how organizational dynamics made the use of political analysis by DFID and the World Bank very inconsistent, I thought I needed to devote some thinking to the “so what” question and come up with some semi-coherent recommendations.

The basics of PEA in aid

The first two of our “Key findings” still ring true:

The impact of political economy analysis on aid is mediated by the internal administrative politics of donor agencies. — So true. And applicable to private providers and NGOs, in my limited experience. Even when aid organizations commit to the principle of politically-smart aid, there is still a fundamental organizational problem of who should commission PEA, when, and subject to what kind of quality control. All too often political analysis remains a desirable high-level goal for leaders, a core competency for “governance” types, and a nuisance or transaction cost for non-governance types.

Greater impact in the future depends on the ability and willingness of PEA proponents to promote organisational change. — Also true. Aid organizations – whether agencies, departments, or even programme teams – do not organically gravitate towards politically-smart programming. That means someone must advocate and lobby for PEA to be taken seriously.

What can be done?

The meat and potatoes of the briefing came in a discussion of “trade-offs” between different strategies for mainstreaming PEA, which I simplified into three broad categories: professionalization, programming requirement, and management. The trade-offs in question were three: the ability to control how PEA is done, the level of sophistication that PEA has, and the actual impact it achieves. We even included a neat little table to highlight these:

I honestly think this table is still basically right.

In terms of professionalization, it would be fair to say that the PEA/TWP conversation has evolved markedly, and that this is reflected in ever more sophisticated discussions about concepts, approaches, and tools within the governance/politics community. But this internal evolution has not been accompanied by an external revolution: PEA/TWP, as I write in my book, are still largely the purview of that limited professional community. As much as one could argue that Thinking and Working Politically is intrinsic to Adaptive Development and learning approaches, that has not led to a migration of sophistication from the former to the latter.

How about programming requirements? I can only speak from my experience working in a DFID project. Our quarterly reports do include PEA sections, and DFID has required various PEA reports. The impact of these PEA reports is moderate to significant. The inception-phase PEA actually reshaped part of our approach, which has had lasting consequences for the programme. And evaluators do look at our PEA submissions, and even ask whether we can prove the PEA has impacted practice. But no-one in DFID has really policed the quality of the PEAs I have done, except at the level of empirical detail (of which, admittedly, local offices tend to have tons). I have been left to my own devices to come up with, revise, and implement an analytical framework (which I may share at some other time), with no peer review. This makes it effectively “Pablo’s PEA” – it just so happens that I am a political scientist who is familiar with PEA/TWP, so I do attempt to police myself quite a bit.

Ultimately, internalizing PEA in our everyday work has been a challenging task, and I have had to work very closely with team leaders and programme managers to get as far as we have. Without managers who remind technical staff to use PEA tools, who ask the entire team to provide comments on the PEA sections of the quarterly reports, or who go so far as to schedule monthly PEA meetings, our political analysis would have been constrained strictly to what was required by DFID, which means a couple of long reports plus cursory quarterly updates. So that leads me to think that management remains the most crucial avenue for mainstreaming PEA in implementation, whatever the level of sophistication.

What I got wrong

Four years ago I was pretty much a “PEA brat”, outspoken and annoying, but without much real-world experience. In the time since I have worked as the PEA lead for a DFID programme, helped colleagues in an INGO strengthen their PEA work, and made some modest contributions to the UK-based PEA/TWP/governance community. I’ve gained enough XP to level up a few times, and with experience comes wisdom, and a bit of soul-searching.

In hindsight, the most glaring omission in my 2014 ESID Briefing Paper is the tapestry of organizational relationships that undergirds an aid project (which I do cover briefly in Why We Lie About Aid, chapter 7). At the time, our research was concerned largely with governance professionals in the HQ and field offices of two major donors. We did not account for the very simple fact that implementation of DFID projects, for instance, is contracted out to providers, who then hire advisers to do the technical bits of work and establish relationships with local and international partners.

In other words, DFID does not “do” PEA. In our anti-corruption programme in Ghana, PEA is carried out by an international consultant (yours truly) supported by a local PEA officer, under the supervision of a team leader, all of us hired by a private provider, which is contractually accountable to DFID. And while all of us can get along and agree on how much PEA to do, and how to make it impactful, there is still so much that can go wrong due to the reliance on personality and individual quirks, on top of sometimes contradictory organizational incentives.

I could have an interest in PEA because it’s my added value as a professional; my bosses could have an interest because DFID “wants PEA”; the local DFID adviser could have an interest because PEA can be used as leverage, whether in the office, with partners, or back home in HQ. Truthfully, I have not encountered such mundane and instrumental reasons for conducting political analysis within our programme. In fact, everybody (by now, even technical advisers) is committed to being politically-smart – yes, we are that lucky. But I wonder what tiny changes here and there would, in a cosmos made of infinite parallel universes, shift our programme PEA ever so slightly from central to peripheral, to cursory, to non-existent.

I did not anticipate that level of complexity back in 2014, and that limited how far my recommendations could travel.

But still, the key findings remain: PEA needs to be advocated and nurtured for it to truly impact design and implementation. To which I would add: a dedicated PEA advisor can be useful throughout the life of a project, so long as programme management, the provider, and the DFID supervisory team all support PEA in principle and in practice. Absent that kind of complicity, don’t bother: save the money for something useful, and instead just try to hire smart advisers who know how to keep an ear to the ground.

And remember…