“Real” process tracing: part 1 — context

Mark Rothko, Black on Maroon

When asserting the value of theory-based methods, you often hear words like “black boxes” and “causal mechanisms.” These are commonly uttered to sell methods such as Contribution Analysis (CA), Process Tracing (PT) and Realist Evaluation (RE). Most commonly, the sales pitch revolves around providing an alternative to (rather than a complement to) experimental designs such as Randomised Controlled Trials (RCTs) — see here for why simply critiquing RCTs gets us nowhere, and why, instead, theory-based and participatory methods need to talk to one another.

As discussed in a previous blog series, method bricolage can add significant value to the evaluation field. Some efforts have been made to combine theory-based methods before. John Mayne and Barbara Befani (2014) helpfully outlined how to combine Contribution Analysis and Process Tracing. Realist Evaluation and Process Tracing also have considerable potential complementarities. But the overlaps and complementarities have rarely been acknowledged. Only yesterday, I discovered that INTRAC and Christian Aid had, in fact, developed some guidance for realist-inspired process tracing. Yet it appears this largely fell on deaf ears. As I wrote this blog back in February, I shall play Alfred Russel Wallace to their Charles Darwin and make the argument for why RE and PT need to talk.

Notwithstanding the odd passing reference to realist philosopher Roy Bhaskar and to Alexander George and Andrew Bennett’s work on case studies, seven canonical books on Realist Evaluation and Process Tracing published since Realistic Evaluation appeared two decades ago don’t cite each other’s work (Pawson, 2006; Pawson, 2013; Pawson, 2018 for realist evaluation; George and Bennett, 2005; Bennett and Checkel, 2014; Beach and Pedersen, 2016, 2019 for process tracing). This is perhaps best illustrated by (Realist) Ray Pawson’s mention of “other disciplines:”

In my view, Realist Evaluation and Process Tracing have a lot more in common than methodologists are willing to admit. Methods new and old are looking to position themselves as if they are the solution, but in practice, we have plenty to learn from one another. In the table below I highlight some of the similarities and some potential differences between RE and PT we will discuss in the blog series:

Context, Mechanism, and Outcome statements (or CMOs) are the building blocks of Realist Evaluation (Pawson and Tilley, 1997). So, we can consider many of these similarities and differences by looking at how context and mechanisms are understood, and then reflect upon some of the practical implications of potentially reconciling differences.

Context is all

As Huey Chen (2015) has argued, “we should judge a programme not only by its result but also by its context.” Evaluating the merit of programmes requires an appreciation of context for any explanation of achievement. To this point, Chen emphasises that programme interventions are “open systems” (in the biological sense) rather than closed systems. This means they are affected by culture, social norms, economic conditions and various other contextual factors. Much as randomistas may protest, these are often very hard (if not impossible) to control for. Chen uses “ecological context” to refer to the context that directly interacts with the programme — for example, the social, psychological, and material supports service users need in order to use a service. As we will see, both the RE and PT literatures argue that contextual factors trigger mechanisms.

As realist guru Gill Westhorp notes, the contexts in which programmes are embedded make a difference to the outcomes that are generated. For example, tennis balls don’t bounce the same way on a tennis court as in space or under water. Contexts are most commonly the salient aspects of circumstances, situations, or groups. These are generally (but not always) visible phenomena. They may play a causal role (i.e. they may be necessary), but they do not directly cause the outcome. You have to add something else for it to make sense as a causal process.

In process tracing, just as in realist evaluation, you should specify the contextual conditions that must be present for a mechanism to be triggered and for an outcome to happen. Unlike Christian Aid’s paper, I believe that is where we should start when combining PT and RE.

And while context is less explicit in most PT, the most eloquent of process tracers, Derek Beach and Rasmus Brun Pedersen (2016), do highlight the importance of context. For instance, they argue that a car is a mechanism that transfers causal forces from a cause (the burning of fuel) to the outcome (forward movement), but it needs oxygen to do this (Beach and Pedersen, 2016). If oxygen is necessary, then it plays a causal role, even though it is formally considered context. The same mechanism in a different context might produce a different outcome, or no outcome at all (Falleti and Lynch, 2009). Thus, the takeaway is that mechanisms only operate sometimes. You should be able to imagine circumstances in which a mechanism won’t work. Think about tennis balls under water or cars in space.

Next time, we will open some black boxes.

I'm an independent consultant specialising in theory-based and participatory evaluation methods.
