PEA Confessions, part II: Report rapport

I have written things you wouldn’t believe. Country assessment frameworks for social accountability organizations. I watched donors try to coordinate in a small Central American country. All those reports will be lost in time, like tears in rain.

It’s Part Deux of PEA confessions! This time I want to discuss one of my favourite pet peeves: PEA reports. In this case I will refer back to some of the themes covered in Why We Lie About Aid, and in particular to a 2015 ESID briefing that I wrote: “Making political analysis useful: Adjusting and scaling”. Again, the goal here is to see if prior insights hold true in light of more practical experience as a PEA consultant.

The smartest people in the room

Let’s start with a basic hypothesis: the effort that PEA proponents have invested over the years in developing PEA frameworks of various stripes is not directly proportional to the impact of political-economy analyses in practice. What a sad hypothesis, right? But one that my research and experience appear to confirm.

The potential impact of a PEA study is mediated by at least three factors: analytical foundation, empirical rigour, and uptake by intended recipients.

Over the last 20 years there’s been a good deal of effort to enhance the analytical sophistication of political analysis in the aid community. Drivers of Change in DFID, SGACA in the Netherlands, Power Analysis in SIDA, and Problem-driven PEA in the World Bank are just a few of the highest-profile examples of frameworks for political analysis. The community may not have settled on a particular model, but over the past decade we have seen an intense process of diffusion of PEA. However, while the intellectual agenda behind PEA can be counted as a success, the actual application of these sophisticated frameworks has lagged behind. In my – admittedly limited – experience, instead of a push towards coherence, what I have encountered is a haphazard approach to commissioning PEA, a let-a-thousand-flowers-bloom mentality that relies heavily on the PEA consultant’s own biases and inclinations.

The fact is most “PEA consultants” out there are probably not “PEA experts” so much as sector specialists who can look at things from a political-economy angle. In the PEAs that I have encountered, concepts and analytical frameworks take up a few pages at best, with the majority of the document devoted to empirical material.

You might think that this is fine, because at the end of the day donors care about reality, not theory! But that is a tricky path to tread, for two reasons. First, how can you tell what is relevant and what is not? That’s what concepts and theory are for: they guide our understanding of the world, discriminating between what’s central and what’s peripheral for our particular analytical goals. Second, an overreliance on empirical material raises challenges for the validity and reliability of the study: how sure are we that our data accurately reflects reality, and how confident are we that a different analyst would come to the same conclusions? That is why a sound methodology and peer review are so important. Unfortunately, both elements also tend to be absent from PEA reports.

That’s two hurdles already: Is your framework analytically sound? Is your empirical work valid and reliable? Even if you manage to overcome these hurdles, there is still the question of intended recipients. And that’s where things get hairy.

A tree falls in a forest and no one is around to hear it…

Political-economy analysis has many purposes. In my 2015 briefing I outlined three such purposes: agenda setting, problem solving, and influencing. I think it’s safe to say that most PEA reports are supposed to help with agenda setting: identifying key barriers, risks and opportunities, potential partners and allies, and so on. PEAs are often commissioned to delineate the boundaries between the desirable and the unfeasible.

Traditionally, these types of PEA have been accused of being “too academic” to be of any relevance. That was one of the criticisms levelled against the Drivers of Change reports fifteen years ago: they focused on what not to do, instead of helping to identify what could be changed (which is a sweet irony, given the name of the framework!). They were not action-oriented documents, but in-depth reflection pieces.

But beyond the “so what” challenge, I have always had a nagging concern about agenda-setting PEA reports: do people actually read them?

In my current role in Ghana, over the past 2.5 years I have written or co-written three major PEA reports, plus a bevy of quarterly updates. These reports were typically required by programme management or by DFID directly. And those are usually the people who read them: the programme manager, the team leader, the DFID advisor. Technical members of the team rarely delve into the nitty-gritty, and due to the sensitive nature of the content we tend not to share these studies in their original form with civil society or international partners.

When you think about it, that makes for a pretty sad author-reader ratio.

PEA beyond reports

I completely understand and support the rationale for not sharing sensitive material. Aid programmes often have to strike a balance between partners that do not fully trust one another, and the privileged position that external actors enjoy often makes for PEA conclusions that are ridiculously easy to weaponize in a local power struggle. But this rationale also drastically limits the potential impact of a written study, to the point that we must start thinking about the report as just step 1 in what should be an ongoing process of politically smart work.

That’s why we have a proposal for Everyday Political Analysis. That’s why the Thinking and Working Politically agenda represents a marked improvement over naked PEA. But while we have had plenty of frameworks and intellectual progress on what concepts and data a PEA study should look into, we have much less guidance on what an ongoing PEA process looks like.

In my 2015 briefing I did try to sketch out what different forms of engagement would look like, building on a core analytical goal and set of questions: the one-hour conversation, the one-day workshop, and the one-month report. At the time that sounded like a nice incremental journey from the most barebones and efficient type of PEA to the most comprehensive and sophisticated one.

But now I realize this type of guidance is only useful insofar as it is grounded in a broader programming approach that embeds PEA into strategy, planning, implementation, monitoring, and learning. And I have not encountered clear tools or frameworks for how to do that. Instead we are just making it up as we go – which is fine! But it leaves me profoundly dissatisfied: the chasm between PEA’s theoretical guidance and its practical impact will continue to undermine the purported value of the TWP agenda, and the persistent invisibility of reports and recommendations will continue to hamper the systematic compilation of evidence in favour of more politically informed aid.