We often criticise experimental methods like RCTs for failing to open the “black box.” Yet, how often do theory-based methods successfully open black boxes? How often do we unpack what we mean by a black box anyway?
When I hear the words “black boxes,” I can’t help but think of the American abstract expressionist artist Mark Rothko. Rothko once declared:
“I’m interested only in expressing basic human emotions — tragedy, ecstasy, doom, and so on… And the fact that a lot of people break down and cry when confronted with my pictures shows that I can communicate those basic human emotions… If you… are moved only by their color relationships, then you miss the point” (Rothko in MoMA, 1999).
Photos never do Rothko justice. He was prescriptive about the environments (or contexts) in which his paintings were to be displayed: in dim lighting, and never alongside works by other painters. Was he running a controlled trial, or was Rothko a realist? He was certainly a narcissist. Either way, in such a context, you don’t just see “colour relationships.” In the right light, the canvas appears to vibrate at the boundaries between colours. So Rothko’s boxes are a good metaphor for the relationship between context and mechanisms.
Definitions of causal mechanisms in social science are legion. James Mahoney identifies as many as 24, and doubtless there are many more. While we may not agree on what mechanisms are, we can agree on what they are not. Mechanisms aren’t laws: unlike laws, they aren’t universal. As mentioned in the previous blog, mechanisms don’t always fire; they fire only under certain contextual conditions. As Jon Elster notes, while contextual conditions might be difficult to identify, mechanisms should be “frequently occurring and easily recognisable causal patterns.” And, as Falleti and Lynch point out, mechanisms should explain “how and why a hypothesi[s]ed cause, in a given context, contributes to a particular outcome.”
Realists officially define mechanisms as the “interaction between what the programme provides and the reasoning of its intended target population that causes the outcomes.” However, a recent review by Lemire et al. of 195 realist evaluations published in peer-reviewed journals between 1997 and 2017 found that only a minority of evaluations (31%) defined mechanisms in accordance with the original definition.
This suggests both that, in theory, there is substantial disagreement among realists about what mechanisms are, and that, in practice, realists’ view of the world may not be so different from that of other social scientists.
Westhorp (2018) recently expanded Pawson and Tilley’s (1997) original definition of mechanisms, but in my view the main added value of realism lies in understanding mechanisms as the psychological drivers of reasoning. A focus on reasoning forces us to think about an actor’s motivation (i.e. why they did something). It forces us to look beneath visible phenomena, as Westhorp’s diagram below shows:
To illustrate, I recently asked what evidence we have on the effectiveness of sanctions in social accountability programming. Westhorp and her co-authors looked at this question in a review of accountability in the education sector a few years ago. One hypothesised mechanism was “big brother is watching” (actors respond in anticipation of the application of rewards or sanctions). Another was “carrots and sticks” (actors respond to the actual application of rewards or sanctions). They found little evidence for either mechanism in practice, yet both are widely assumed to be key mechanisms in the sector despite this dearth of evidence.
It’s common in the accountability field to describe programme activities and programme outcomes and then simply assume that the motivational impulses fit our mental models (i.e. duty bearers are motivated by fear of punishment arising from citizens’ “pressure”), without demonstrating why a duty bearer would reasonably be influenced by this pressure (e.g. through a sense of fear or shame), or how this influenced their decision to change their behaviour. Ultimately, whether one is compelled by such an impetus depends on the social context and the reasoning of individual actors within that context. Not all actors will be motivated by fear, but under stringent conditions, some might.
Process tracing mechanisms
Like realist evaluation, process tracing doesn’t have a single agreed definition of a mechanism. However, the definition that makes most sense to me is a “system of interlocking parts that transmits causal forces between a cause (or a set of causes) and an outcome” (Beach and Pedersen, 2019: 38). The authors even cite the realist philosopher Roy Bhaskar in justification of this definition.
Process tracing should not simply be a descriptive narrative of events which logically follow in a temporal sequence (George and Bennett, 2005; Bennett and Checkel, 2014); rather, it should explain the causal links and relationships between causes and outcomes and between specific events (Falleti and Lynch, 2009; Beach and Pedersen, 2019). Process tracing, then, is about cogs and wheels: by looking at how events are linked together, it explains how causal forces are transferred.
In this way, most forms of process tracing are good at answering HOW questions. Process tracing is good at explaining the connections between events, often at macro or meso levels. This is what most uses of process tracing in political science have done over the last few decades. However, process tracing can also be employed at micro level, and at this level realist evaluation’s focus on reasoning can help add greater precision on WHY questions.
In the most granular form of process tracing, Beach and Pedersen (2019: 4) argue that a mechanism consists of entities (actors, organisations), which are the forces engaged in activities, and activities, the producers of change that transmit causal forces, as the diagram below shows:
To entities and activities we can add a specific prompt (in red) questioning the reasoning of particular actors or organisations engaged in the process. Effectively, we can add a BECAUSE statement which will help show not only the relationship between parts 1, 2, and 3, but also help demonstrate WHY actors may have decided to take a particular course of action (i.e. reasoning in response to resources).
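As a loose illustration only (not part of Beach and Pedersen’s or Westhorp’s frameworks), the idea of a mechanism as a chain of parts, each pairing an entity with an activity plus a BECAUSE statement capturing reasoning, can be sketched in code. All names and the example content below are my own invention, modelled roughly on the “carrots and sticks” mechanism discussed above:

```python
from dataclasses import dataclass

@dataclass
class MechanismPart:
    """One hypothetical part of a causal mechanism (illustrative only)."""
    entity: str    # the actor or organisation engaged in the activity
    activity: str  # what transmits causal force to the next part
    because: str   # the BECAUSE statement: why the actor acts this way

# A toy "carrots and sticks" chain (invented content, for illustration)
mechanism = [
    MechanismPart(
        entity="citizens",
        activity="report poor school performance",
        because="they expect officials to respond to public pressure",
    ),
    MechanismPart(
        entity="duty bearers",
        activity="improve service delivery",
        because="they fear sanctions if they ignore the reports",
    ),
]

# Walk the chain, making the reasoning at each step explicit
for i, part in enumerate(mechanism, start=1):
    print(f"Part {i}: {part.entity} -> {part.activity} "
          f"(because {part.because})")
```

The point of the sketch is simply that the BECAUSE field forces every link in the chain to state an actor’s reasoning explicitly, rather than leaving motivation implied, which is exactly the gap in the accountability literature described above.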
Indeed, as realist evaluation is also a configurational method, both the micro-level form of a mechanism expressed above and the meso-level expressions of Causal Process Observations (CPOs) in process tracing can be aggregated beyond a single case (as we see in the “carrots and sticks” and “eyes and ears” mechanisms). In this way, we can fit aspects of realist evaluation within process tracing and aspects of process tracing within realist evaluation. This also allows process tracing to become more reconcilable with Elster’s definition of a mechanism as “frequently occurring and easily recognisable causal patterns.”
In the next blog, I’ll look at the supposed epistemological friction between realist evaluation and process tracing, and explain why this may be less of a problem than methodologists have claimed.