From metaphor to analytic tool to evaluation practice

In a recent blog, I provided a basic introduction to Process Tracing through the Netflix show Making a Murderer. Process Tracing has been used in political science research in a wide variety of ways for a few decades, yet its use in evaluation is much more recent, and it is still considered an insufficiently tested method (Masset and White, 2019). While there is some excellent guidance for researchers that helps translate the philosophical metaphor into an analytic tool (George and Bennett, 2005; Bennett and Checkel, 2014; Beach and Pedersen, 2019), accessible and practical guidance for evaluators and project managers remains limited (Punton and Welle, 2015; Befani et al., 2016).

In the paper Process Tracing as a Practical Evaluation Method: Comparative Learning from Six Evaluations, Alix Wadeson, Bernardo Monzani, and I take a further step towards improving practice. Going beyond theoretical best-practice recommendations, we offer more in-depth, real-world learning from six Process Tracing evaluations conducted by early adopters of the method over the last four years: CARE International, Oxfam America, World Vision Canada, and the International Institute for Environment and Development (IIED). We include reflections from our experience using Contribution Tracing in Ghana and Bangladesh, Contribution Rubrics in Côte d’Ivoire, and further adaptations of Process Tracing for global programming.

The paper's main intended audience is evaluators and programme managers who are considering Process Tracing training, weighing what it might take to apply the method, but struggling to find practical ways to do so.

The Measuring the Hard to Measure event, at which the Ghana team shared their experience of using Contribution Tracing, broke online attendance records at ODI, and thousands of people have read the Contribution Rubrics guidance. So, I’m confident the paper will be useful to some of you. We had intended to share our learning at the Canadian Evaluation Society Conference this year, but in these uncertain times, as I discussed in another recent blog, the operational assumptions behind that plan may prove vulnerable. So, for now, we’re sharing the paper here virtually.

While we provide a fairly heavy discussion of the strengths and weaknesses of the method (e.g. evidence tests and Bayesian logic), the majority of the paper focuses on our practical learning: how to get the most out of participation, the importance of theories of change, the methodological decisions you need to wrestle with in practice, and how to mitigate biases in more participatory forms of Process Tracing evaluation. We also offer recommendations to improve practice and use, including various practical tips for evaluators and programme managers (which Alix will discuss in a follow-up blog). We may also produce a shorter synthesis at some point.

To provoke you to read the paper, here are five overarching recommendations to improve Process Tracing evaluation in practice:

Context, context, context: In Process Tracing, how strong your evidence is for a particular explanation depends on what it means in context. In some circumstances, a particular piece of evidence can offer a powerful indication of cause and effect; in other circumstances, the same evidence may be worthless. Gold-standard methods are a nonsense for complex change processes because no evidence is universally relevant. So, it’s vital to take into account the role of other actors and existing relationships, and to reflect on your underlying assumptions about how change happens (i.e. your theory of change), before conducting Process Tracing for either monitoring or evaluation.
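
To make this concrete, here is a minimal sketch in Python of the Bayesian logic the paper draws on. The numbers are entirely hypothetical, not drawn from any of the six evaluations: the point is simply that the same piece of evidence shifts confidence in a contribution claim a great deal in one context and barely at all in another, depending on how likely that evidence would be if the claim were false.

```python
# Minimal sketch, illustrative numbers only (not from the paper).

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: confidence in a contribution claim after observing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.5  # start agnostic about the claim

# Context A: the evidence would rarely exist if the claim were false
print(round(posterior(prior, 0.8, 0.1), 2))  # 0.89 -> strong support

# Context B: the very same evidence could easily exist for other reasons
print(round(posterior(prior, 0.8, 0.7), 2))  # 0.53 -> close to worthless
```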

Participation is worth it, if you’re careful: Making Process Tracing evaluation participatory can be extremely beneficial. While it may be time-consuming, bringing project teams and partners into the process can help build evaluative thinking about evidence; it can also help identify what evidence means in context and distinguish between good and bad evidence. However, if you make the evaluation participatory, you have to be careful to address potential biases, as in all small-n impact evaluations. Including diverse perspectives can also add rigour, allowing you to scrutinise explanations in a different light through critical friends and peer review. Transparently assessing rival claims (through Bayesian logic) and partially blindfolding data collectors to what your theory is may also help make findings more credible.
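
As a hedged illustration of what transparently assessing rival claims can look like, the same updating logic can be applied to the project's contribution claim and to a rival explanation, so the comparison is explicit rather than implicit. The likelihoods below are made-up assumptions for illustration, not findings from any evaluation.

```python
# Hedged sketch only: hypothetical likelihoods, not the paper's data.
from functools import reduce

def update(prior, p_if_true, p_if_false):
    """One Bayesian update of confidence in an explanation after a piece of evidence."""
    numerator = p_if_true * prior
    return numerator / (numerator + p_if_false * (1 - prior))

# Each item: (P(evidence | explanation true), P(evidence | explanation false))
project_claim_evidence = [(0.9, 0.3), (0.7, 0.2)]      # e.g. meeting minutes, partner testimony
rival_explanation_evidence = [(0.9, 0.8), (0.4, 0.5)]  # the same events read against a rival story

claim = reduce(lambda p, e: update(p, *e), project_claim_evidence, 0.5)
rival = reduce(lambda p, e: update(p, *e), rival_explanation_evidence, 0.5)

print(f"Confidence in project claim: {claim:.2f}")  # ~0.91
print(f"Confidence in rival claim:   {rival:.2f}")  # ~0.47
```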

Evidence tests and rubrics help achieve rigour: While formal evidence grading and quantification of confidence levels through Contribution Tracing can strengthen causal leverage, this may not always be feasible, given the capacity and time the method requires. If you have less capacity and time available, a relatively high level of rigour can still be achieved by grading evidence with evidence tests and using simple rubrics to assess confidence levels.
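
For readers without the capacity for full quantification, here is a rough sketch of what grading evidence with evidence tests and using a simple rubric could look like. The four test names are standard Process Tracing terms; the thresholds and rubric bands are illustrative assumptions of mine, not cut-offs from the paper.

```python
# Illustrative sketch: thresholds and labels are assumptions, not the paper's own.

def evidence_test(certainty, uniqueness):
    """Classify a piece of evidence using the four classic Process Tracing tests.

    certainty:  how likely we are to see the evidence if the claim is true
    uniqueness: how unlikely we are to see the evidence if the claim is false
    """
    if certainty >= 0.8 and uniqueness >= 0.8:
        return "doubly decisive"
    if certainty >= 0.8:
        return "hoop test"          # failing it seriously undermines the claim
    if uniqueness >= 0.8:
        return "smoking gun"        # passing it strongly supports the claim
    return "straw in the wind"      # suggestive either way, but not decisive

def confidence_rubric(confidence):
    """Translate a numeric confidence level into a plain-language rubric band."""
    if confidence >= 0.85:
        return "high confidence"
    if confidence >= 0.65:
        return "moderate confidence"
    if confidence >= 0.50:
        return "low confidence"
    return "claim not supported"

print(evidence_test(certainty=0.9, uniqueness=0.9))  # doubly decisive
print(confidence_rubric(0.72))                       # moderate confidence
```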

Blend complementary methods: I’ve written a whole blog series on method bricolage, and like any good method, Process Tracing can strengthen other methods and be strengthened by them. Process Tracing can borrow from Realist Evaluation to make stakeholder motivation more explicit, adding the word 'because' alongside explaining who did what, when, and how. It can also steal outcome statements from Outcome Harvesting, making contribution claims more specific and thus more testable. And it can even take blindfolding from the Qualitative Impact Protocol (QuIP) to reduce confirmation bias, as recently proposed in Veil of Ignorance Process Tracing (VoiPT). What matters is finding the right fit.

Share learning: Finally, we believe we can only stand to gain by promoting transparency and dialogue on evaluation findings and processes, including the challenges we face in conducting evaluations. This paper aims to help spur further debate about the practicalities of applying this challenging but valuable method, especially as Process Tracing lacks a community platform like those of various other approaches (Outcome Harvesting, Outcome Mapping, Realist Evaluation, etc.). I wonder whether it's worth creating one? Given that the Centre of Excellence for Development Impact and Learning (CEDIL) project has half a dozen grants for Process Tracing, perhaps that's something they might be able to establish…?

In particular, we’d be interested to hear more from evaluators and practitioners about how we can use Process Tracing to help improve theories of change, how it might support more effective monitoring, and how it might even aid adaptive management.

Watch out for Alix’s blog on practical tips for evaluators and programme managers next week.

Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.