Valerio D’Ospina, Biking in White Woods #2

Complexity and theories of change: redux

Thomas Aston
11 min read · Jul 17, 2023

I’ve been reading lots of critiques of planning, prediction, and theories of change recently. I’ve written a fair bit on theories of change on this blog: on how to make theories of change more useful, the pitfalls of theories of change, the difficulties of distinguishing outputs and outcomes, the boundaries of influence, foresight and theories of change, paying attention to assumptions, and on how to connect assumptions and triple loop learning. I’ve also highlighted the perils of evaluating complexity simplistically and of ignoring context. I think it’s important to be reflexive and stress test the approaches and tools we use. So, here goes.

Of course, I’m not the first person to think about this. Over a decade ago, Patricia Rogers wrote an influential article Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions. I figured it was time for an update. The evaluation field is divided on whether theories of change are reconcilable with complexity or not, and several issues continue to linger.

In their 2018 paper, Thinking Big, New Philanthropy Capital (NPC) attempted to bring theories of change and systems thinking together more explicitly. They highlighted five major pitfalls, which are a good place to start. Synthesising these with other common critiques (e.g., Mowles, 2014; Jenal and Liesner, 2017), I’d argue that the most important pitfalls are the following:

(1) Neglecting context and related blind spots;

(2) Over-confidence in “our” influence;

(3) Fallacy of unidirectionality;

(4) Misplaced beliefs about stability and certainty;

(5) Failure to adapt and capture emergence.

I’ll briefly walk through each of these and point to some of the recent work done to address these pitfalls.

Sensitivity to context

Theories of change have often been criticised for not giving sufficient attention to the wider context in which an intervention is taking place, and the nature of the interactions between the intervention and this wider context. For example, in 2014, Craig Valters highlighted the recurrent problem that:

“Organizations imply that change in a society revolves around them and their program, rather than around a range of interrelated contextual factors, of which their program is part.”

In my paper with Marina Apgar on bricolage, we looked at 18 theory-based methods. Relatively few of these explicitly paid attention to context as a key step in the process. So, I think this is an evaluation methods problem, in general. Often, we assume that projects/programmes will carry out political economy analysis, power analysis, or other analyses as part of the process. But, even if they do, we know that such analyses often sit on shelves unread.

In a special issue of Evaluation on complexity, Helen Wilkinson et al. wrote an article on Building a system-based Theory of Change using Participatory Systems Mapping. Like others (e.g. Alford, 2017; Davies, 2018; Abercrombie et al. 2019; see also DESA Research, 2022), they suggest that systems mapping and theory of change diagrams can be usefully integrated. They argue that the ‘process of a Participatory Systems Mapping exercise is designed to focus on the whole system rather than on the programme or intervention first.’ They argue that it provides a broader and deeper understanding of the way complex systems operate. In theory, the analyses I mention above might also accomplish this understanding, but I can see how having an initial systems map prior to developing a theory of change could be a useful step.

Overconfidence in “our” influence

A second habitual problem is an overconfidence in “our” intervention/project/organisational influence. In his recent critique of demonstrating impact, Toby Lowe underscored that ‘your work is a small part of a much larger web of entangled and interdependent activity and social forces.’ Relatedly, last year, Duncan Green argued that:

“We need to distinguish between theories of change (how the system itself is changing, without our involvement) and theories of action (the small differences we can make, usually in alliance with others). If theories of change start by putting us at the centre of everything, that is a serious problem — we almost never are. But I lost that battle, so let’s stick with [theories of change].”

Even if he lost the terminological battle, there’s still an important point here. Our influence is limited, and many other actors play an important role in achieving any significant outcomes. I didn’t find the distinction itself especially helpful though, given that there are interrelationships between (1) identifying what you can directly influence, and (2) acknowledging what you can’t, what others can influence, and how you can potentially respond. I think the risk of Green’s distinction was, ironically, to focus too much on theories of action, and what is within your direct control.

Conversely, many have taken up the shorthand from Outcome Mapping to consider our spheres of control, influence, and interest, as this diagram from Clark and Apgar shows:

Clark and Apgar, 2019

The key takeaway: “Stop trying to change the world; focus on your sphere of influence.”

Another way of looking at this in recent years has been to look at how your efforts fit within wider change trajectories. As Boru Douthwaite et al. (2023) put it in their paper on Outcome Trajectory Evaluation:

‘Project outcomes are not single, one-off events; rather, they are generated over time by an interacting and co-evolving system of actors, knowledge, technology and institutions.’

They refer to this system as an outcome trajectory. Outcome trajectories are constructed by working backwards from existing outcomes rather than forwards from inputs to impacts. Douthwaite et al. (2023) found that ‘the outcome trajectory of interest was itself nested within broader outcome trajectories.’ For instance, to understand how a draft declaration by the African Union supporting biofortification emerged, they also had to understand the broader outcome trajectory that led to the development and introduction of biofortified crops into Africa in the first place. I guess you might also call this outcome nesting.

Nesting is another growing practice, in general. John Mayne proposed nested theories of change to disaggregate theory of change pathways into more manageable parts, and several others have followed suit with nested actor-based theories of change. Dave Snowden argues that ‘given the interconnections and entanglements between parts,’ complex systems ‘can’t be broken down into smaller pieces.’ Yet it’s widely argued that systems are nested (Meadows, 2008; Juarrero, 2010; Boulton et al. 2015; Bicket et al. 2020). Hence, nesting analyses shouldn’t be perceived as an original sin.

Working in the opposite direction, Michael Quinn Patton’s theories of transformation pool multiple big-picture theories of change. I think his aspiration is far too grand, hubristic, and impractical. In my view, one of evaluation’s great failings in recent years is the belief that if we simply evaluate more, there will be more transformation. Evaluation’s skeptical turn is a necessary corrective. Nevertheless, it’s worth remembering that the black box is full of many theories. Rather than a single theory of change, Carol Weiss therefore referred to ‘theories of change.’ Realist mechanisms might also be viewed as mini-theories of change, as the figure below from Westhorp et al. on the effects of accountability on education outcomes shows:

Fallacy of unidirectionality

Thirdly, critics often complain that theories of change are too “linear.” What they’re actually referring to, in most cases, is the unidirectional flow from inputs to impacts in theory of change diagrams, rather than linearity. The causal pathways above look like this for diagrammatic parsimony, but they actually reflect sequential, parallel, and feedback-loop pathways. Rogers reminds us that logic models (and theories of change) tend to show one pass through the intervention, ignoring virtuous or vicious circles. Patton (1997; 2011; 2015) has long expressed his concerns about this problem and argues that links in theories of change, especially between outcomes, are often recursive rather than unidirectional. Patton is also right that some cause-effect relationships are mutual, multidirectional, and multilateral.

There have been several attempts to address issues of unidirectionality in theory of change diagrams (e.g. Mayne, 2015; Stroh, 2015; Davies, 2018). One key part of this is a focus on feedback loops. Yet, as Rick Davies points out, including feedback loops in theory of change diagrams tends to be uncommon, largely because it makes them harder to read. Diagrammatic parsimony will always be a limitation.

One way around the issue of illegibility, as discussed above, is to develop a systems map prior to a more focused theory of change. In many respects, this is another form of nesting, focusing on systems and sub-systems. You could even have a theory of action within a theory of change within a systems map, if you really want to and have the time to do so.

Even if many factors are connected, some may be enablers, others derailers; some may be strong ties, others weak ones. The main thing systems maps can help with is drawing greater attention to potential feedback loops and denoting whether these are positive or negative, and strong or weak. You might then zoom in on the most important of these, as the diagram below shows.

Wilkinson et al. 2021
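
To make this concrete, a systems map of this kind can be represented as a signed, weighted directed graph, from which feedback loops can be identified and classified before deciding which to zoom in on. The sketch below is purely illustrative: the factor names, signs, and strengths are hypothetical, and it assumes the networkx Python library is available.

```python
# A minimal sketch of a systems map as a signed, weighted directed graph.
# Factor names, signs, and strengths are hypothetical, purely for illustration.
import networkx as nx

G = nx.DiGraph()
edges = [
    ("community trust", "participation", +1, "strong"),
    ("participation", "service quality", +1, "weak"),
    ("service quality", "community trust", +1, "strong"),
    ("participation", "staff workload", +1, "weak"),
    ("staff workload", "service quality", -1, "strong"),
]
for source, target, sign, strength in edges:
    G.add_edge(source, target, sign=sign, strength=strength)

# Identify feedback loops and classify them by the product of edge signs:
# an even number of negative links makes a reinforcing (positive) loop,
# an odd number makes a balancing (negative) loop.
for cycle in nx.simple_cycles(G):
    pairs = zip(cycle, cycle[1:] + cycle[:1])
    sign = 1
    for u, v in pairs:
        sign *= G[u][v]["sign"]
    kind = "reinforcing" if sign > 0 else "balancing"
    print(" -> ".join(cycle + [cycle[0]]), f"({kind} loop)")
```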

Misplaced beliefs about stability and certainty

The natural product of hubris and unidirectional arrows is a misplaced belief about stability and certainty of achieving change. In my experience, few people believe that outcomes or impacts are certain. Nancy Cartwright et al. have done some recent work on making predictions of programme success more reliable through theories of change. I’ve never been entirely comfortable viewing theories of change as predictive models. Whether they are designed to predict impact is highly debatable.

A rather better framing, in my view, is assessing degrees of uncertainty. People are often lazy about developing assumptions in theories of change, listing a series of factors which may be highly uncertain but which they nonetheless hope will hold. In Causal-Link Monitoring, Heather Britt and Richard Hummelbrunner focus on how likely critical assumptions are to fail (i.e., how vulnerable they are). I have always found this to be a better understanding of assumptions.
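
As a purely hypothetical illustration of that idea, one could rate each assumption by how likely it is to fail and how much it matters if it does, and then monitor the most vulnerable first. The assumptions and scores below are invented; this is a sketch of the logic, not Britt and Hummelbrunner’s own tool.

```python
# Illustrative sketch only: ranking assumptions by how vulnerable they are,
# in the spirit of Causal-Link Monitoring. Assumptions and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    likelihood_of_failure: float    # 0 (safe) to 1 (almost certain to fail)
    consequence_if_it_fails: float  # 0 (trivial) to 1 (derails the pathway)

    @property
    def vulnerability(self) -> float:
        return self.likelihood_of_failure * self.consequence_if_it_fails

assumptions = [
    Assumption("Ministry remains committed to the reform", 0.4, 0.9),
    Assumption("Trained staff stay in post for two years", 0.6, 0.5),
    Assumption("Communities attend the public meetings", 0.2, 0.7),
]

# Monitor the most vulnerable assumptions first.
for a in sorted(assumptions, key=lambda a: a.vulnerability, reverse=True):
    print(f"{a.vulnerability:.2f}  {a.text}")
```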

Cartwright’s work emphasises moderating factors: “support factors” and “derailers.” The presence or absence of these factors by no means guarantees success or failure, but they help us to think more seriously about what factors may need to be in place (or absent) in a particular context for a programme to make a difference (see Davey et al. 2018; Masset and White, 2019; Cartwright et al. 2020; Cartwright, 2020).

Another promising area of work on assumptions comes from Jonny Morell. He recently wrote a blog on his book chapter, Assumptions Through a Complexity Lens. Morell includes the following figure specifying conditionals in models. The model at the top depicts the kind of logic people usually use.

The green and red diagrams illustrate that we can include and/or specifications in logic models and theories of change. The green has a reasonable chance of success because there are so many “or” paths (i.e., there are multiple paths to success), whereas the red shows a doomed programme because, to achieve the desired outcome, all paths must work. Of course, there are also potential interrelationships and interaction effects between different pathways (and entanglement). But the key point here is to emphasise uncertainty and contingency.
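
A toy calculation helps show why the shape of the logic matters. If, purely for the sake of illustration, we assume each pathway succeeds independently with the same probability, “or” logic and “and” logic give very different chances of reaching the outcome:

```python
# A hedged illustration of the and/or point, assuming (unrealistically)
# that each pathway succeeds independently with the same probability.
def or_paths(p: float, n: int) -> float:
    """Outcome achieved if ANY of n independent pathways works."""
    return 1 - (1 - p) ** n

def and_paths(p: float, n: int) -> float:
    """Outcome achieved only if ALL n independent pathways work."""
    return p ** n

p, n = 0.6, 3  # hypothetical numbers
print(f"'or' logic:  {or_paths(p, n):.2f}")   # ~0.94: multiple routes to success
print(f"'and' logic: {and_paths(p, n):.2f}")  # ~0.22: every link must hold
```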

We know that in complex systems many factors combine to produce outcomes, though not all factors are necessarily equally important. Unfortunately, this is something many complexity theorists seem unwilling to consider, and it leaves sensemaking far less fruitful in my view. We need to find practical ways not just to describe but to analyse what appears to matter, or not. As Morell points out, traditional logic models suggest that all causal relationships (i.e., arrows) are equally important, and therefore we don’t care whether connections are strong or weak, only that they exist. Morell argues that it’s worth signalling the strength of relationships through thicker or thinner lines in diagrams. He suggests that if a programme theory shows too many weak connections, it may not be worth implementing in the first place.
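
A rough sketch of how that signalling might work in practice: treat the programme theory as a set of causal links with (hypothetical) strength scores and check each pathway for its weakest link. The names and numbers below are invented for illustration only.

```python
# Illustrative only: a programme theory as causal links with rough strength
# scores, flagging pathways whose weakest link looks too fragile.
links = {
    ("training", "knowledge"): 0.8,
    ("knowledge", "practice change"): 0.6,
    ("practice change", "service quality"): 0.7,
    ("campaign", "public pressure"): 0.6,
    ("public pressure", "policy change"): 0.2,  # a weak connection
}

pathways = {
    "service pathway": ["training", "knowledge", "practice change", "service quality"],
    "policy pathway": ["campaign", "public pressure", "policy change"],
}

for name, steps in pathways.items():
    strengths = [links[(a, b)] for a, b in zip(steps, steps[1:])]
    weakest = min(strengths)
    flag = "  <- worth questioning before implementation" if weakest < 0.4 else ""
    print(f"{name}: weakest link strength {weakest:.1f}{flag}")
```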

In some ways, and/or options resemble what Jewlya Lynn argued for in bringing foresight practice into the evaluation toolkit (something I discussed in a blog at the time). According to Lynn, a key weakness of theories of change is that they refer to “one possible, relatively narrow pathway into one possible future.” Lynn explains how she replaced theories of change with scenario maps to get around this issue. Rick Davies has also looked at potential synergies between ParEvo and theories of change. What ParEvo offers is a focus on alternative narratives, and on assessing their desirability and probability.

Alternatively, Dave Snowden recommends a vector theory of change. In this, you forget about long-term goals entirely. You start from where you are, map the system’s current dispositional state, and identify a desired direction of travel, but not a final destination. If you have a broad outcome area, or options for different possible outcomes, you don’t really need to do this, nor am I confident that forgetting about goals helps. Yet a “no destination” perspective obviously offers greater latitude for potential emergence. If you have no idea where you’re going, it could hardly be otherwise.

Morell, Lynn, Davies, and Snowden all illustrate that there are ways around the one pathway, one future problem.

Failure to adapt and capture emergence

As Patricia Rogers mentions, one of the most challenging aspects of evaluating complex interventions is the notion of emergence. I’m not convinced any complexity theorist or practitioner (whether in evaluation or not) has a good answer for emergence. But let’s take a quick look at what has been attempted.

John Hoven argues that an “evolving theory of change” should be one that is continually revised based on evidence rather than assumptions. For all the reasons I’ve provided above, I think this is wrong: you don’t just need evidence; you need to update your assumptions too. Rogers, though, underscores a common view that ‘emergent outcomes may well require an emergent logic model — or in fact one that is expected to continue to evolve.’ This implies more flexible, and possibly looser, theories of change which are then updated periodically over time.

The problem is that too few people actually revise their theories of change in practice. Duncan Green highlighted that none of the seasoned campaigners he ran a theory of change workshop with “could recall a campaign team ever getting the initial theory of change off the virtual shelf and revisiting it.” On one hand, this might stem from the fact that campaigners claim to be the busiest people in the world and perhaps don’t take sufficient time to pause and reflect. A recent survey of US non-profits shows that only a minority worked with external evaluators, and only a minority evaluated with a view to changing what issues they focused on or how they spent resources. When they did, most used after action reviews; other methods were rarely used. But a failure to update theories of change is a wider phenomenon, as reviews from the Consultative Group on International Agricultural Research (CGIAR) on their use of theories of change also show.

In evaluations themselves, we’re increasingly seeing proposals to use theories of change iteratively, and to improve them over time, as the figure below from Marina Apgar et al. illustrates:

Apgar et al. 2020: 3

There remains debate about how good theory-based methods are for evaluating complexity. In general, while there are certainly flaws, I don’t believe that theories of change are irreconcilable with complexity, nor do many other evaluators (Bamberger et al. 2016; Paz-Ybarnegaray and Douthwaite, 2016; Barbrook-Johnson et al. 2021; Lynn et al. 2022; Douthwaite et al. 2023). My main takeaway from reviewing recent work is that we need to be humbler. But there is ever-growing awareness, knowledge, and a wealth of resources to help us make the best of a difficult task.


Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.