Unpacking the golden box
I finally got the chance to read the Accountability Research Center’s (ARC) recent study on “bottom up accountability” in Uganda. It’s a long read, but worth it because it offers some insights into a “black box of implementation.”
I’ve discussed opening black boxes before, but I wanted to offer a few thoughts on what “opening the black box” really entails in the accountability debate, and what implications this may have for practitioners and donors.
The takeaway I want to emphasise here is that what matters is not just what and how much you do, but when you do it, where you do it, and how you do it.
This sounds like an obvious thing to say, but as Angela Bailey and Vincent Mujune’s study demonstrates, we rarely pay attention to these dimensions in the accountability sector.
The gold standard’s mixed results
The paper rightly points out that what is believed to be known about the accountability sector has been dominated by the findings of Randomised Controlled Trials (RCTs). We rejoice when these find positive results and despair when they find null results. On the strength of mixed results from only a handful of studies in only a few places (Uganda, India and Indonesia), the accountability sector sees itself as being in limbo. We assume that mixed evidence from a handful of RCTs is something we should worry about.
It need not be thus. That despair rests, in large part, on evidence shaped by the methodological limitations of RCTs. The ACT Health intervention was modelled on the influential Power to the People (P2P) study in Uganda, but there were, in fact, many differences between the two interventions and studies (e.g. study populations, baseline conditions), and Bailey and Mujune thus raise questions about how reasonable the comparison is. By design, RCTs and other experimental methods focus only on the what and how much, and generally assume that the when, the where and the how don’t matter much. But, of course, they do.
Context matters for complex programmes
The countervailing point here is that context does matter. One aspect of this is methodological, another is strategic. Accountability interventions are not simple but complex, because they depend on people, power, and relationships. As per Brenda Zimmerman’s illustration of the Stacey Matrix below, accountability interventions are almost invariably in the complex zone, because answers are rarely certain and stakeholders often disagree about what is right, as well as what will work:
Is dosage the answer?
Bailey and Mujune’s paper spends considerable time explaining differences in what and how much was implemented in the two phases of the programme. They emphasise that the first phase was “relatively low dose” and “locally-bounded,” and suggest that the “strategic multi-level approach” of phase 2 was more promising.
While “intervening (more) intensely” may make a difference to something like the effectiveness of non-violent civil resistance in triggering regime change, regime change is somewhat beyond the scope of even a large accountability campaign.
As Tom Wein’s comparison of the P2P and ACT Health programmes suggests, how much an initiative intervenes (e.g. the length of the project, how many communities it covers) does not provide an entirely satisfactory answer to the “what works” question. Indeed, a number of studies show that even very “low dosage” interventions can lead to significant results, so dosage may not be that strong a predictor of success.
The when, the where, and the how
In my view, however, we need to unpack this further. We shouldn’t merely be thinking in the randomista logic of what and how much. It’s not unreasonable to assume that different results may have something to do with when, where and how interventions were implemented, not merely what their tactics were or how much they did.
I firmly believe that different tactics (or combinations of tactics) are likely to be effective at different moments in time, and/or in different places, and/or because of the way in which these tactics are implemented.
As CARE’s experience in Ethiopia, Malawi, Rwanda and Tanzania showed, whatever your tools (or tactics), they are likely to be most effective when they are adapted to context.
Accountability interventions are not easy to replicate because context changes, and context makes a big difference to what is possible. Quite obviously, then, the tactics or strategy that worked in one context five years ago may not work there now. Or vice versa: what didn’t work in that context five years ago may work now. It depends on the ideas, interests, and incentives of the actors involved in that context and the opportunities and constraints that context creates.
To take a few examples: just before the ACT Health programme in Uganda, the Human Resources for Health (HRH) Campaign showed how, following closer engagement on health budget oversight with MPs on the Social Services Committee (seemingly keen to expose the Ministry of Health), civil society actors pivoted from what was labelled a “gentleman’s approach” to one which the Ministry of Health considered an attempt at ‘sabotage.’ The confrontational pivot was effective in contributing to the government agreeing to a higher staffing budget. Yet some CSOs saw this as a ‘pyrrhic victory’ because their role ‘declined as a result of changes carried out by the executive immediately following the campaign’s budget victory,’ with reduced transparency from cabinet as a result. The HRH campaign shows that both the when and the how mattered. A window of opportunity opened, and then very quickly closed. The strategic pivot may have led to a breakthrough, but this choice triggered a significant backlash, which appeared to have a long tail.
The Central American Institute for Fiscal Studies (Instituto Centroamericano de Estudios Fiscales, or ICEFI) in Guatemala made a similar pivot, from an insider, collaborative approach to a campaigning, outsider one. It ‘took advantage of a major corruption scandal [just days after the La Línea scandal broke in April 2015] to call for new legislation to enhance accountability in the country’s revenue administration agency.’ And then, after the president and vice president both resigned and were arrested, and a new president (Jimmy Morales) was elected on the slogan “ni corrupto, ni ladrón” (neither corrupt, nor a thief), it had the opportunity to return to a strategy of “mutual collaboration” in a way that was not possible in Uganda in 2012. The International Budget Partnership (IBP) labels this case an “insider/outsider game,” and yet it seems more accurate to describe it as an insider, then outsider, then insider game.
Adapting Guerzovich, Gattoni, and Algoso’s diagram on windows of opportunity for anti-corruption reform, when we look at the dominant approach (collaborative or confrontational), the two cases appear to look something like this:
Meanwhile, also in 2015, the Department for International Development (DFID) was designing the Partnership to Engage, Reform and Learn (PERL) programme in Nigeria (which I’ve worked on for the last year). The business case argued that DFID’s experience in programmes such as the State and Local Government Programme (SLGP) had shown that “it [was] often difficult to combine support for the executive arm of government with support for the legislature and non-state actors because this requires trust-building with government and encouragement of organisations who may often be critical of government.”
The State Accountability and Voice Initiative (SAVI), which followed SLGP and immediately preceded PERL, employed a constructive engagement approach, unlike SLGP. Some partners in SAVI had to “unlearn their ‘placard carrying’ mindset… [and it] took time and subtle advocacy from group members to convince stakeholders in the state government.” In general, this pivot to a more collaborative approach was seen to be significantly more effective and enabled PERL to become an integrated supply-and-demand programme once more. However, with the recent #EndSARS protests, some civil society actors in Nigeria might well be considering whether they wish to re-learn a placard-carrying mindset (in some states, at least). Whether a new pivot would be effective (if it comes), and how much such an approach might derail the trust established with government, remain to be seen.
Hopefully, what the above illustrates to practitioners is that there is perhaps no one “right” path to success, but simply the right-fit path for the context here and now. Going forward, we need to pay greater attention to when, where, and how interventions find that fit. We need to reflect on their strategic intent, how well they are adapted to context, and how best to harness opportunities to work both with and against the grain (though not necessarily at the same time).
Having a clear read on when and where a particular set of tactics will work, and how those tactics should be implemented to achieve the best results, is tricky. On reading the above, David Jacobstein suggested to me that perhaps this should give donors pause, and that they might “step back from directing” (which assumes there is a clear “best practice” to be seamlessly replicated). Instead, perhaps, what they can do is help build relationships that could pay off later at the right moment; (fund and) share research on how interventions adapted their tactics and strategy over time (including more thoughtful reflection on potential backlash, similar to that discussed by Bailey and Mujune); and help actors to consider windows of opportunity and what they might want to have ready when (and if) the moment arises.
Thanks to Angela Bailey for our discussion on ARC’s paper, to Florencia Guerzovich for helping me to adapt the diagram, and to David Jacobstein for his thoughts on some of the potential implications for donors.