[Image: Frank Bowling, Africa to Australia]

Levelling the playing field in evaluation methods access

Thomas Aston
10 min read · Feb 8, 2022


#EvalSoWhite hasn’t exactly taken off as a hashtag, has it? I’m quite sure there were good intentions in its conception, and I’m also reasonably confident that the prevailing discourse in the evaluation field is moving in a positive direction. But, I can’t help but feel that little has changed substantially. What is actually more accessible or more equitable now than it was a few years ago?

To me, three areas seem particularly important to address to level the playing field:

  • Language accessibility;
  • Supportive networks; and
  • Affordable training.

I’m sure there are many more worth considering, but these are some areas where I think a little money and time can go quite a long way.

¿Mejor Evaluación? (Better Evaluation?)

I explained in a blog a few years ago how the evaluation field creates niches, builds linguistic edifices, and is full of jargon. A large part of this is linguistic: most monitoring and evaluation literature was written in English, and evaluation remains defiantly Anglophone.

Thomas Delahais of Quadrant Conseil told me that, in his view, the issue isn’t so much training material as accessible material that supports and displays diversity. Quadrant Conseil’s impact approach tree is a really helpful way of displaying some of this (and rather simpler than CECAN’s Choosing Appropriate Evaluation Methods tool). Yet, he told me that he and his colleagues were “stunned how much easier it was to understand these texts in our native language [French] though we already knew them.” This bears repeating. As it turns out, I first learned monitoring and evaluation in Spanish. I agree with my namesake: it is considerably easier to learn this stuff in your native language.

The Outcome Mapping and Most Significant Change (MSC) communities made a deliberate effort to translate materials more than a decade ago, and BetterEvaluation has also translated a number of other theory-based and participatory methods into Spanish. Hats off to BetterEvaluation, in general. But the fact that it took until September 2019 to translate many of these methods suggests a wider problem of uneven language access. This is an eminently solvable problem, even with quite limited resources.

In my previous blog on evaluation jargon, I also explained a bit about how cross-cultural translation matters. This is not (or shouldn’t be) a one-way street where knowledge is simply transferred from North to South, nor only from English into other languages. But, there remains both a language premium and a cultural premium in the evaluation field, which we need to take into account rather more purposefully.

A rising tide will lift all boats?

Despite language inequities, there has been an expansion of global and regional evaluation networks in recent years. At least in theory, regional evaluation networks ought to decentre and recalibrate what is valued and whose values and valuing matter. Whether they do in practice, I’m not sure yet, but there seems to be plenty of promise.

One network which caught my eye last year was the Network of Impact Evaluation Researchers in Africa. The William and Flora Hewlett Foundation provided some seed funding for the network, and this is certainly to be commended. Should I be troubled, though, that its address is “United States International University-Africa, Off USIU Road”?

Some networks have been around for a few years now. The Red de Seguimiento, Evaluación y Sistematización de Latinoamérica y el Caribe (ReLAC, the Latin American and Caribbean network for monitoring, evaluation, and systematisation) stretches back to 2015. The West Africa Capacity-building and Impact Evaluation (WACIE) programme has been around for several years, as has the CLEAR Initiative. Some evaluators in both North and South may also have participated in gLOCAL Evaluation Week, part of the Global Evaluation Initiative (GEI). There are probably many others I’ve missed. Nonetheless, the rise and growth of such networks should make a difference, both by increasing capacity and, hopefully, by increasing the diversity of methods and perspectives we seek in evaluation more generally.

What about method training?

This is perhaps my biggest area of interest. A 2020 scoping study of impact evaluation capacity in Africa by the Africa Evidence Network found 520 African researchers with African affiliations, across 34 different countries, who had authored 490 impact evaluation publications between 1990 and 2015. South Africa has the most published impact evaluation researchers, followed by Kenya, Uganda, Tanzania, and Zambia.

But, this capacity still relied heavily on costly training abroad. Almost 40% of respondents to the survey said that they received impact evaluation training at universities outside Africa. The majority of respondents received training at European universities in the Netherlands, Germany, and France, and some had received impact evaluation training in the USA and Canada. Most accredited courses focussed on monitoring and evaluation in general, not impact evaluation more specifically. But, impact evaluation was an important part of the story.

The distracting cottage industry of RCTs

Partly because of how some economists define “impact evaluation,” a substantial part of the training story appears to be linked to Randomised Controlled Trials (RCTs), and much of this capacity is said to be concentrated in a handful of schools of public health at elite universities in the Global South. Makerere University in Uganda is one such example.

The American Economic Association (AEA) RCT Registry shows that an extraordinary proportion of RCTs are conducted outside the USA. The Registry currently has over 4,500 RCTs across 159 countries, with more projects taking place in Africa than in any other region.

[Figure: AEA RCT Registry entries by study region and year, 2013–2020]

I don’t think Martin Ravallion’s scepticism about the dominance of RCTs is far off here, unfortunately.

But what if I don’t want to learn how to do an RCT? What if RCTs aren’t appropriate or feasible for the programme I want to evaluate? Shouldn’t programmes and evaluators in the Global South have some choice in which methods suit their purposes? And who should be doing this training?

As Sarah Lucas and Norma Altshuler point out, this isn’t (and should not be) simply about foreign advisors or trainers filling African (or other regions’) capacity gaps, but (where possible) about leveraging “existing capacities in individuals, organizations and systems,” while aiming to make “better use of local talent and capabilities.” Lucas and Altshuler are right that this is certainly about power and equity. And yet, relying on 3ie and J-PAL (with their methodological hierarchies) may run the risk of further concentrating that power rather than diffusing it. It’s time we recognised that most of this problem is of our own making in the North.

Estelle Raimondo of the World Bank’s Independent Evaluation Group (IEG) and Thomas Delahais both suggested that one key part of the problem is the difficulty of convincing commissioners to experiment with less well-known or more advanced/rigorous methods such as Process Tracing or Qualitative Comparative Analysis (QCA), for example.

Methodological capacities and preferences obviously vary across regions. Florencia Guerzovich reminded me that, beyond RCTs, there are large qualitative methods networks, whether anchored in Latin America’s elite universities or built around Latin American scholars trained in the US by methods gurus. See this excellent list from Raul Pacheco-Vega on Process Tracing, for example, which clearly demonstrates Flor’s point.

But, not much of this appears to have translated across to evaluation. How might we both diversify graduate school curricula and make more effective use of (post)graduates’ knowledge?

It’s also entirely possible that many of the researchers and/or their students who have the “right stuff” haven’t ever considered evaluation work. Perhaps some don’t realise the knowledge they have can be useful for evaluation, or simply don’t have the right access and contacts to redeploy their (useful) research knowledge. I imagine that the mutual pretentiousness of research and evaluation communities doesn’t help much either.

[Image: CLEAR LAC’s 3rd edition of the diploma in qualitative evaluation]

This diploma from CLEAR seems to offer a helpful, albeit quite expensive, general introduction to some qualitative method basics (the fact that the course description is in English illustrates my point about language above). But it doesn’t quite respond to the increasingly specialised and advanced demand for methods training.

Estelle further told me that we need to make sure that people who receive methods training, or who work towards developing methodological skills and expertise, are then able to connect to the demand. This is where she believes Young and Emerging Evaluators networks or Peer-to-Peer (P2P) initiatives are crucial. But, they’re not quite there yet.

I say this because I’ve had a number of requests during the pandemic asking whether I know people in a given country who know a given method. I also increasingly see Terms of Reference which ask for specialised knowledge of particular methods such as Outcome Harvesting, Realist Evaluation, or Contribution Analysis.

As Flor mentioned, drawing on the Global Partnership for Social Accountability’s (GPSA) experience, supply doesn’t match the demand for these types of methods. Finding evaluators who are proficient in such methods, know the context and local language(s), are available, and are willing to work for quite small budgets is a lot more difficult than you might think. Estelle agreed that finding the “miracle consultant” has been a real problem for evaluation commissioners. Estelle and Thomas also pointed out the challenge of finding a consultant (or consultants) who not only knows the method technically, but can effectively translate technical findings into plain language. These are not necessarily complementary skill sets.

I’ve been lucky enough to have the right kind of passport, to earn enough money to buy (expensive) evaluation books, to travel to evaluation conferences, and to attend specialised trainings in these sorts of methods. Yet, very many can’t. In my view, foundations should not be funding only RCT training or basic M&E courses, so that when Americans and Europeans parachute into Africa (or elsewhere) they have some local evaluators who can be their willing (and supplicant) data collectors.

If philanthropic organisations really care about equity and power, then they should also be providing more bursaries to attend conferences and funding training in a variety of methods. I’m told that some do (the UN, for example), but also that fewer and fewer agencies are willing to support conferences and bursaries. This seems to be a mistake. Without this support, it’s difficult to envisage how we might achieve more equitable access, or even differentiated pricing and fee waivers, to level the playing field. It’s also important that more evaluation training in diverse methods takes place where evaluations will actually be conducted.

One of the precious few benefits of the COVID-19 pandemic has been that more training is available online than ever before.

I’ve done a little bit of this myself. In 2020, I supported Kaia Ambrose in delivering an online training on Outcome Mapping and Outcome Harvesting in Spanish for the Outcome Mapping Learning Community (OMLC); the fees covered only the OMLC’s costs. I also delivered a training on Process Tracing with Alix Wadeson at the European Evaluation Society (EES) conference in 2021. I’m not sure what the exact cost of that training was, but the fact that it was online meant we could have participants from Indonesia to the USA in the same session, at a lower cost for those participants. Both trainings were fully booked. But, even online conferences and method community trainings are still potentially out of financial reach for many. Clearly, this is where foundations or other donors should step in. But, without this additional support, what else can we do to level the playing field?

Making evaluation methods more accessible

One simple and easy thing I think we can do, just like the calls for open journal access, is to make more training publicly available. This came to mind when I was recording a training on Contribution Analysis myself a few weeks ago. I also noticed that Lindsay Mayka had made her methods course available online, and I wondered what else I might find. Who are some well-known methodologists, and what do they have freely available online?

There’s more than I thought. Several method communities have their own YouTube channels: the Centre for Advancement in Realist Evaluation and Synthesis (CARES) has one, as does the Outcome Harvesting community, and the Institute for Qualitative Multi-Method Research (IQMR) also has a channel. I found some pretty eminent methodologists, including Derek Beach and Andrew Bennett on Process Tracing and case studies, Gill Westhorp on Realist Evaluation, Kaia Ambrose and Simon Hearn on Outcome Mapping, Heather Britt on Outcome Harvesting, and Jess Dart and Rick Davies on Most Significant Change (MSC), among others. So, I made a YouTube playlist of these introductions.

Clearly, you can only get so much out of a 60- or 90-minute introduction to methods which generally take a few days to learn and apply in a workshop. But, it’s a start. And this can clearly go further if longer and more tailored trainings become publicly accessible (or are developed within regional evaluation networks). What else might we add to this list? Can we imagine doing this in different languages?

As David Guzmán Matadamas suggested to me, perhaps this is the sort of thing that should be housed on a platform by a university or think tank. In that way, as he pointed out, bigger players might have the confidence to support it financially. Maybe, just maybe, that should even be a university or think tank in the Southern hemisphere. To me, this seems consistent with Hewlett’s aims, for example.

Yet, as Jewlya Lynn, Sarah Stachowiak, and Julia Coffman suggest in a recent paper, Debunking Myths About Causal Analysis in Philanthropy, philanthropic organisations still need to have a much more open mind about the methods, forms of evidence, and data they value.

It’s also worth searching around more broadly to see what else is publicly and freely available. I recently found a good free introductory course on Qualitative Comparative Analysis (QCA) on Coursera from Erasmus University Rotterdam, which is helpful even if you can’t find or afford the course book. What other good free courses are out there that we can recommend?

What else can we do to put our money and/or collective knowledge where our mouth is?

Thanks to Estelle Raimondo, Florencia Guerzovich, Rick Davies, Thomas Delahais, Cathy Shutt, David Guzmán Matadamas and Penda Diallo for comments and suggestions.
