Frederic Edwin Church, Cotopaxi

Tales of triumph and disaster in the transparency, participation, and accountability sector

Thomas Aston
5 min read · Aug 26, 2021


Written with Florencia Guerzovich and Alix Wadeson

It’s strategy refresh time

As Dave Algoso reminds us, it’s strategy refresh season. And, this season, the stakes are as high as ever. The Biden administration is figuring out whether and how to walk the talk on anti-corruption; the Open Society Foundations and the Hewlett Foundation’s Transparency, Participation and Accountability (TPA) Programme are doing strategy refreshes; and the World Bank’s Global Partnership for Social Accountability (GPSA) is conducting a strategic review. These are some of the biggest players in the sector, each with its own “niche” and approach to building a portfolio. In considering possible new directions, the Hewlett Foundation has put out a consultation. Al Kags provided some thoughts on what to fund, and even offered a wish list, earlier this week. The Hewlett team asked an important set of questions for all of us, which we have slightly amended:

  • How should we measure progress toward outcomes and in priority countries in a given portfolio?
  • How can we best contribute useful knowledge to the field through grant making, commissioning evaluations, and facilitating peer learning?
  • And what can a portfolio’s monitoring, evaluation and learning system do to link these questions together?

To answer these questions, we must first acknowledge a thorny issue: null results terrify us. Every time a new Randomised Controlled Trial (RCT) offers anything less than unambiguously positive results, we have a Groundhog Day debate about whether the whole sector is a worthy investment.

Nathaniel Heller captured this trepidation well in anticipation of yet another study in Uganda about to publish its results.

A handful of initiatives have given donors the impression that transparency and accountability efforts don’t work. One of these was the Making All Voices Count (MAVC) programme, which some donors (unfairly) called a point-blank “failure” in 2017. We’re now talking about how to avoid bad civic tech projects by default rather than how to make good ones. Further, as one of us explains, two studies published in 2019, the Transparency for Development (T4D) project and another from Pia Raffler, Dan Posner and Doug Parkerson, found null results. These are the supposed disasters that caused collective consternation in the sector.

The conversation seems stuck in a vicious feedback loop. So, to demonstrate success, many rely on idiosyncratic cases, lean very heavily on the handful of country contexts where a lot of RCTs have been conducted, or narrow the focus of study to common tools (e.g. scorecards) and/or outcome measures (for example, an effort to standardize indicators). Many others have sought refuge in “pivots” and “innovation” rather than having a candid conversation about mixed evidence and what we might do (or not) to escape the (narrative) feedback loop. As former International Development Secretary Rory Stewart recently argued [talking about Afghanistan], “we have to stop [saying] ‘either it was a disaster or it was a triumph.’”

The myth of homogeneous and generalisable success

Despite this sage advice, one expert recently told the Hewlett Foundation, rather hyperbolically, that a “lack of evidence about the impact of TPA initiatives is now an existential threat to the field.” And one thought leader was said to have remarked that “the window of opportunity for social accountability will remain open only if we can surface evidence that social accountability is worthy of continued support.”

There are literally a dozen evidence reviews of the TPA sector which refute the claim that TPA efforts don’t make an important difference (we have read them, alongside hundreds of evaluations and studies). Evidence is certainly mixed, but it’s hardly absent. Part of the fear expressed recently is about heterogeneity. This is a nightmare for anyone who seeks to use evaluations to draw generalisable conclusions about complex TPA processes. Many impact evaluators have opted to reduce interventions to a single tool and a single outcome, omitting too many components of the work, in search of findings about the “average beneficiary” that are universally valid and hold in all contexts. In the TPA sector, variation in outcomes across contexts and sectors is something to be expected, not feared. We regularly assert that “context matters,” and yet we forget this when it actually matters. As Howard White and Edoardo Masset from the Centre of Excellence for Development Impact and Learning (CEDIL) highlight, we should focus on transferability: findings that tell us what contextual factors condition (or not) the transfer of a finding from one setting to another.

On balance, if you read the evidence reviews in the sector, the message is generally positive. A Qualitative Comparative Analysis (QCA) of the UK’s former Department for International Development (DFID) Empowerment and Accountability portfolio, for example, which looked at 50 projects in 2016 (prior to the sector’s apparent fall from grace), found that ‘service delivery is almost always achieved.’ Should this have been celebrated as a triumph? The review was largely ignored — perhaps because it wasn’t an expensive RCT in Uganda, or perhaps because success wasn’t homogeneous, nor was it unambiguous. Other ground-breaking reviews in the sector using realist methods (in this case, in education), which present an array of mechanisms and outcomes and take context into account, have also been largely ignored. So, either there is collective amnesia, a selective and millenarian reading of the evidence, or experts’ expectations of “worthiness” may be rather too elevated.

As Peter Evans of the UK Foreign, Commonwealth and Development Office (FCDO) explains, evidence reviews have their flaws. We would argue that many of them have unwarranted methodological biases and, ironically, that some make grand arguments without much empirical evidence. Evans is also right that “no-one ever opens an evidence review and finds the perfect answer to their question.” But when evidence reviews don’t quite cover it, that doesn’t mean we should resign ourselves to the hot takes of a few researchers or the loud voices in our echo chambers, or give undue credence to a handful of expensive impact evaluations (usually in Uganda).

The supposed “existential threat” is not primarily empirical in origin, but semantic and discursive. It’s about how the field, and the donors that support it, have constructed what success and failure mean. Let’s be careful not to throw the baby out with the bathwater; this isn’t a tale of either triumph or disaster.

The question for us remains: how might portfolio-level Monitoring, Evaluation and Learning (MEL) in the TPA sector build a more inspiring narrative that helps make the case for continued investment in collecting evidence of TPA’s impacts over the medium to long term?

In our second blog post, we’ll start answering this question. We’ll share insights from our work as MEL consultants working with different portfolios, connecting the dots across projects and across portfolios.
