Tom Thomson, Northern River

(Re)making the case for adaptive management

Thomas Aston
10 min read · Jun 5, 2022


Christian Aid Ireland’s recent publication The Difference Learning Makes by Stephen Gray and Andy Carl made a bit of a splash. The study found that Christian Aid Ireland’s application of adaptive programming contributed to better development outcomes and supported more flexible delivery. The much vaunted MUVA programme in Mozambique is also coming to a close and presenting its results from using an adaptive approach.

So, it struck me that we might be at a critical juncture in the conversation on adaptive management. We’ve had the crashing to earth of inflated expectations in recent misanthropic reflections, alongside a fragile institutionalisation of adaptive management in donor agencies, NGOs, and private sector organisations. However, I’d argue that we’ve reached the point where adaptive management has passed the proof-of-concept stage.

Adaptive mindsets

In my view, adaptive management and thinking and working politically (TWP) have taken off to the degree that they have not because of the evidence base to support them, but because of champions within donor agencies and implementing organisations who “get it” intuitively, who had the courage and risk appetite to experiment, and the reflexivity to learn.

Finding donor champions

In the case of DFID/FCDO, it’s relatively easy to identify and name some of these champions. They include: Pete Vowels, Richard Butterworth, Chris Pycroft, and Sam Waldock (who are still on the inside), Graham Teskey, Wilf Mwamba, Tom Wingfield, Laure-Hélène Piron, and Heather Marquette (now on the outside), and the late, great, Sue Unsworth. A relevant potted history of part of the story can be found in Is DFID Getting Real about Politics?

It’s also noteworthy that these “champions” wrote up quite a lot of what they thought. That is, they weren’t (and aren’t) just responding to an agenda but also setting one. I’ve linked some of their writing above, but you can see that they co-wrote guides on everyday political analysis, papers or briefs on addressing the political dimensions of development or politically smart and locally led development, blogs on adaptive programming and on the politics of why (some) governance programmes work to name a few.

While change is obviously institutional, it’s also individual, and it’s hard to get around the vital role of individual champions pushing for change inside organisations, and the importance of their status and standing within those organisations.

The champions above are people who don’t need convincing (they probably never did), but they do need some institutional support. As one staff member from Christian Aid Ireland in Sierra Leone put it, “unless the right culture is there, none of our tools or approaches would work.” As Emma Proud reminded me, implementers also need the right kind of mechanisms (whether within a donor, implementer or consortium/project) to manage adaptively, such as that which Alina Rocha Menocal and I found in the (also much vaunted) Partnership to Engage, Learn and Reform (PERL) programme.

Donor champions further need some evidence to back up their case. It will likely be the same in other donor agencies, where there may be innovators and/or early adopters with the right adaptive mindset, but they can’t make the case on their own to their luddite peers.

As the donor environment for adaptive programming has worsened, there’s also been an increasing exodus of FCDO staff. My concern is that following the peak of inflated expectations, we might not have enough champions to muddle through the recent trough of disillusionment.

Gartner hype cycle

As many organisations cravenly follow the tides of donor preferences, this exodus may be a bigger challenge than many appreciate if we are to sustain the case for adaptive management (of whatever form).

I take the peak of inflated expectations to be 2016, if you consider the volume of publications in ODI’s database:

GLAM Database, Cases, 2006–2022

Marshalling the evidence

A graph like the one above might seem dispiriting, and as Kathy Bain emphasised to me, the battle has not yet been won. We may “still be on the back foot,” given challenges in the wider political context (at least in the UK). Yet, there is at least some evidence for hope in the slow accumulation of an evidence base which demonstrates that adaptive programming can be effective, and there is plenty of real-world usage.

We know that evidence alone can’t make a big difference to policy making processes, but it can help to convince some people who believe in the value of evidence in the first place (assuming that they consider that evidence to be good quality and credible). Hernandez et al. (2019) argue in favour of four steps for strengthening evidence-informed adaptive management:

1) Establish the need for evidence in adaptive management (why, what and how);

2) Consider the appropriate types and levels of evidence;

3) Assess the robustness of that evidence, including whether and how it can be used for decision-making;

4) Ensure the basis of adaptive management decision-making is sound, transparent, and documented.

I generally agree with this, but we must be careful not to fall into the trap (which Gray and Carl’s report did) of assuming that because we can’t do a Randomised Controlled Trial (RCT), we can’t really know much about what works, how, and why. Not only is it basically impossible to conduct an RCT (or other experimental methods) on adaptive programming, it’s totally inappropriate to do so even if it were feasible, as I’ve discussed previously.

We further need to dispel the “best practice” myth that the same thing will work everywhere. Adaptive management isn’t supposed to solve all the world’s problems in the same way everywhere (quite the contrary, we’re talking about “best fit” programming).

In Christian Aid Ireland’s report, we found the following plea from a donor: “we need to hear the evidence to justify those investments.” So, what would you use to convince them that adaptive programming is worth the investment?

The United States Agency for International Development’s (USAID) Learning Lab is one step ahead of me. If you check their Collaborating, Learning, and Adapting (CLA) Toolkit landing page you’ll see the following on Making the Case for CLA (one key type of adaptive programming):

‘A growing body of evidence indicates that collaborating, learning, and adapting contribute to improved organisational and development outcomes.’

See their literature review and CLA case competition analysis for more information. There’s even a case competition map which shows that CLA is global, and there’s a whole set of guidance and tools on adaptive management, including on context-driven adaptation.

It certainly helps when donors’ own publications say this kind of thing, because champions can point to their own evidence to make the case to their peers. The same applies for implementing organisations.

There are many papers which make a convincing case for adaptive programming. Here’s my top 5:

The theoretical case is clear: traditional approaches generally fail to tackle complex (or wicked) problems. More problem-driven, politically-smart, locally-led, and adaptive programming offers a potential alternative.

By now, there’s a relatively large body of evidence on adaptive programming. GLAM’s Adaptive Development Zotero features 112 cases (or rather, documents) between 2006 and 2022. Consider that systematic reviews tend to make generalised (but potentially credible) statements based on a few dozen cases (sometimes less). However, we shouldn’t just cherry pick our favourite cases of success.

There are some good reviews assessing between a handful and thousands of development projects. Here are my top 10 recommendations:

There are, of course, many more examples that I could have included from other large INGOs or private sector organisations (e.g., Chemonics, Palladium, DAI); see here for a position paper on Doing Development Differently (remember that?) from World Vision, IRC, Mercy Corps, Oxfam, and CARE written by Dave Algoso. I think it’s fair to say that adaptive management is becoming a new orthodoxy for large international development organisations, at least in the creative corners of their programming (before it all gets relabelled as design thinking).

There are also now a few good longitudinal studies which show that adaptive management can achieve effects over the medium term:

The conclusion is not that adaptive management is a magic bullet, but there is now a large body of evidence that merits policymakers’ consideration.

It’s been argued that the evidence base is too dependent on potentially biased, self-selected case studies. I think this is certainly fair criticism during the peak of inflated expectations in 2016, but this is less true today, and study quality has definitely improved in recent years. Dasandi et al. (2019) further argued that interviews, documentary analysis, and action research are intrinsically ‘(in)appropriate for establishing causal explanations.’ Chris Roche, Marta Schaaf, Sue Cant and I explain why this is misguided methodological criticism. Case-based approaches are a far better fit for adaptive programming than experimental methods, and it’s high time we reappraised the absurd straightjacket of randomista rigour.

As a critical appraisal, I’m concerned by the over-representation of publications commissioned by DFID/FCDO about themselves, perhaps too many of which were produced by my previous employer, ODI. Far too much of the evidence comes from Nigeria, and too many publications were written by people who look and sound like me. Nonetheless, the case for adaptive management is certainly defensible.

If you’re still hungry for more you can check out Alan Hudson’s open and adaptive archives, subscribe to the #AdaptDev google group, the TWP Newsletter, and GLAM’s Adaptive Development Zotero.

For those who are interested, there are also some good (but now rather out-of-date) studies on what donors learned from doing adaptive programming:

Of course, there is much more that can be done.

We could do with a “state of evidence” study on adaptive programming (a synthesis of syntheses) detailing what we know about the effectiveness of politically-smart, locally-led, and adaptive approaches in different contexts (are there differences between the findings on adaptive management and TWP, for example?). There’s no denying this is a difficult (and politically contentious) job, but I think it’s time.

It would also be interesting to develop cases of where adaptive approaches may have averted failure (if indeed they really have), and even to estimate the potential savings to make a value for money argument for or against adaptive programming. After all, some donors only really care about the cost-benefit analysis.

We could benefit from some more self-critical accounts of when, where, and why adaptive approaches have failed because they were poorly implemented or were perhaps the wrong approach to take in context. The new orthodoxy has plenty of positive bias, and it’s time we took some of our own medicine after lecturing others on the importance of failure.

It might also be helpful to have a comparative analysis of the uptake of adaptive management in donor agencies and to take the temperature today in the shadow of Trump, Morrison, Brexit, COVID-19, the war in Ukraine, etc. This would offer a more sober assessment of the potential for institutionalisation in varied political contexts.

And, we can certainly make a far clearer link to the localisation agenda. We ought to have done this long ago. It’s pretty embarrassing that so little has been done.

For me, the issue of the day is no longer whether there is any credible evidence that adaptive programming can work and can merit the investment. Instead, it’s about effectively marshalling contextually relevant evidence, and better understanding how to prompt some degree of adaptive “tolerance” inside organisations beyond innovators and early adopters.

I wrote this blog with a view to focusing on the evidence. But I’ll say it again: evidence is only a (small) part of any future story of institutionalisation. At the end of the day, whether any of this will stick relies more on how good thinking and working politically folks are at actually thinking and working politically.

Thanks to Alan Hudson, Kathy Bain, and Emma Proud for suggestions, particularly for some of the recommendations on what is still to be done.

--

Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.