[Image: Hurvin Anderson, Ball Watching IV]

What, so what, now what?

Thomas Aston
8 min read · Mar 18, 2022


Getting serious about systems change

In a widely read blog by systems guru Rob Ricigliano a couple of years ago on getting serious about how systems change, I spotted a sub-section which I think deserves greater discussion. Ricigliano thoughtfully points out that “many of us have evangelized the need for systems change while understating the difficulty of the challenge.” The call for systems change is in every left-wing podcast I watch, pretty well every global campaign I see, most INGO strategies, and most evaluation conferences.

Ricigliano makes sensible recommendations, such as not overpromising on systems change. He reminds us that shifting systems is also about shifting ourselves, that you need the right-fit approach for your context, and that you should be prepared to learn from both success and failure. By these criteria, most of us aren’t very serious about systems change.

Yet, what really caught my eye in Ricigliano’s blog was the following:

Systems and complexity MEL is more about the process of constantly sensing the impacts of your work (the what), making sense of them (the so what), and learning what they tell you about how the system operates and how to engage it effectively (the now what).

Another unfortunate trend (particularly in the systems change business) is the effort to make things as complicated, inaccessible, and unresolvable as possible. I’d be lying if I said I truly understood what “symmathesy” or “entangled trios” are. I’m not entirely sure whether there are (or aren’t) “root causes in complexity.” While these are philosophically interesting for someone with a philosophy degree (like me), they aren’t that helpful, practically, for assessing contributions to systems change.

However, something as ostensibly straightforward as the what, so what, now what process probably can help us practically because it communicates something about the basic questions most of us ask (or should ask) most of the time.

What, so what, now what

The what, so what, now what process is attributed to Henri Lipmanowicz and Keith McCandless, the creators of so-called “Liberating Structures.” I recoil at the disruptive business school terminology, but bear with me.

The process has been refashioned into a reflective model, an experiential reflection, a step in a sprint review, and even a proposed structure for a Data Visualization Sketchbook. My colleague John Skelton told me about Mercy Corps’ Prospectus programme in Liberia, which he worked on and which appears to be one of the few efforts in the international development sector to harness this simple and intuitive structure (we’ve used it since elsewhere). Lipmanowicz and McCandless express it like this:

“WHAT? What happened? What did you notice, what facts or observations stood out?” Then, after all the salient observations have been collected, ask, “SO WHAT? Why is that important? What patterns or conclusions are emerging? What hypotheses can you make?” Then, after the sense making is over, ask, “NOW WHAT? What actions make sense?”

This fits into what they call a ladder of inference, shown below:

[Figure: Ladder of inference. Lipmanowicz and McCandless, n.d.]

While I don’t agree with the logic of the whole ladder (originally from Chris Argyris), the links from data to conclusions to actions seem highly intuitive. We observe things, we draw conclusions from what we observe, and we take actions based on those conclusions. How we make sense of the data does, of course, relate to the meanings ascribed to the data, assumptions we hold in relation to the data, and beliefs we hold about the data. But we aren’t blank slates: assumptions and beliefs are formed prior to the data (as I’ve explained previously).
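For those who like things concrete, here is a minimal sketch of how a team might record a what, so what, now what debrief. It is purely illustrative: the data structure and all the content are my own invention, not Lipmanowicz and McCandless’ materials.

```python
# A purely illustrative record of a what / so what / now what debrief.
# All content below is hypothetical, invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class Debrief:
    what: list[str] = field(default_factory=list)      # observations: what happened?
    so_what: list[str] = field(default_factory=list)   # patterns, conclusions, hypotheses
    now_what: list[str] = field(default_factory=list)  # actions that make sense

session = Debrief(
    what=["The dialogue platform met twice; the ministry attended only once"],
    so_what=["Patchy attendance suggests weak ministry buy-in"],
    now_what=["Test a bilateral meeting with the ministry before the next session"],
)

# Print the debrief in the order the exercise moves through its questions.
for stage, items in [("WHAT", session.what),
                     ("SO WHAT", session.so_what),
                     ("NOW WHAT", session.now_what)]:
    print(stage)
    for item in items:
        print(f" - {item}")
```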

Getting stuck at WHAT

We tend to get stuck at the WHAT — describing the problem, what happened, or what we did.

Political Economy Analysis (PEA) emerged, at least in part, because many development programmes were pretty bad at analysing context. There is a plethora of guidance notes, beginner’s guides, applied guides, applied problem-driven guides, even gendered problem-driven guides. Most donor agencies have used a PEA approach, and yet this has been largely ineffective. Thinking politically is one thing; working politically is quite another. And even some of the most politically smart programmes out there still struggle with this.

PEA has been described as the “dismal science of constraints” because it had (and still has) a habit of 20/20 hindsight revelations, of focusing too much on the big picture and losing sight of what your intervention can actually do in response to that structural and institutional context. Problem-driven Iterative Adaptation (PDIA) and its set of recommended tools potentially take us some way towards moving beyond the dismal science. But I fear that too many organisations use the lexicon to denote their membership of an enlightened club, rather than ensuring that their analysis is actually problem-driven, that it is genuinely iterative, or that much thoughtful learning or adaptation actually takes place.

Alternatively, in project learning sessions, it’s very common for participants to spend far too much time describing what they DID, leaving precious little time to reflect on what was actually achieved (or not) and what the relevant implications might be. If you only have an hour (or 45 minutes in Lipmanowicz and McCandless’ exercise), how much time should you really spend simply describing activities?

This is also true of reports, of course. I can recall many occasions when teams would write long essays about all the many things they were doing to justify their activities (and salary), rather than explaining what they were learning about what worked and what didn’t. I still remember suggesting to one Latin American programme manager a decade ago that, rather than a 30-page report (which we didn’t need for the donor anyway), a simple box would be more useful: 1) what you think you’re achieving and why, 2) what you’re not achieving and why not, and 3) how we in central office might help (or not). I was politely ignored. I had rather missed the point of what they wanted to share and why they were sharing a long essay in the first place.

The hidden HOW and the missing WHY

While my efforts were in vain, you can probably see what I was getting at: don’t just describe what you did, think about how and why you believe you are influencing behaviour change (or not), and then think about what we can collectively do about it.

When you ask “what happened?”, a key part of this is actually HOW the what happened, or the journey. As the aforementioned reports were in Spanish, they were often full of statements without a clear subject, such as “a dialogue platform was established.” But we also want to know who did what and when they did it. How is essentially about the way in which something happens. So, by answering a how question, you inevitably have to talk about the who and the when. It’s an invitation to open the so-called “black box.”

WHY is the second part of opening a black box. In this Centre for Development Impact (CDI) webinar, I explained how different methods are geared towards answering different questions. For example, Process Tracing is chiefly focused on understanding the HOW whereas Realist Evaluation is more focused on the WHY. Ultimately, we need to look at both.

The WHY is probably missing from Lipmanowicz and McCandless’ what, so what, now what format because they have another exercise called the nine whys. This asks: “what do you do when working on [insert subject]?” It asks for a list of activities and then asks “why is this important to you?” up to nine times. So, this is asking what we value and therefore what we should do, rather than explaining how and why change happened.

Alternatively, PDIA uses the 5 whys technique as part of problem analysis. It essentially asks: why is this or that problem taking place? And it does this five times, as the worksheet below illustrates:

[Figure: 5 whys worksheet. Serrat, 2009]
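To make the technique concrete, here is a hypothetical 5 whys chain. The problem and the answers are invented for illustration; they are not taken from Serrat’s worksheet.

```python
# A hypothetical 5 whys chain, invented purely for illustration.
problem = "The dialogue platform stopped meeting"
whys = [
    "The ministry stopped sending delegates.",
    "The reform champion in the ministry was transferred.",
    "Participation depended on one individual rather than the institution.",
    "The programme engaged people, not institutional mandates.",
    "The initial analysis mapped individuals rather than incentives.",
]

# Each answer becomes the subject of the next "why?".
print(f"Problem: {problem}")
for i, answer in enumerate(whys, start=1):
    print(f"Why #{i}: {answer}")
```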

Yet, this is also forward-looking. In monitoring and evaluation, sensing the impacts of your work means looking backwards at what was achieved or not.

Beyond merely describing an achievement (or failure), it’s also worth asking (perhaps several times) WHY you think there was success or failure and, of course, what other actors might have contributed to that success or failure.

If you jump straight from the WHAT to the SO WHAT you miss the most important and interesting discussion.

SO WHAT: Why implications matter

When I think of the SO WHAT, I have memories of a PhD upgrading seminar where one professor asked one of my peers exactly this question. After all, for a PhD, you’re supposed to contribute something new. Your “contribution to knowledge” is a big part of that SO WHAT.

But, ultimately, this is about cut-through: why would we think differently or approach a problem differently with this newfound understanding from these data and this analysis?

Problem-driven PEA is supposed to handle this in some measure, but it often gets stuck on the problem and/or describing the system.

In the Partnership to Engage, Reform and Learn (PERL) programme, we had quarterly PEAs. And the PEA tracker (an Excel matrix) had a useful question about the implications of the analysis’ findings for particular actors and problems. This is exactly the kind of explicit, practical question to ask, and it’s what ought to connect analysis to action.

Yet, reading through many of the PEA reports and monitoring reports each quarter, there often wasn’t much new (salient or significant) information. I believe this is true of most quarterly reporting, and it’s why quarterly reporting tends to be a pretty trivial exercise of describing activities and outputs, without much thought given to programmatic strategy or identity.

As Alina Rocha Menocal and I discussed, we need to create space for deeper, more meaningful, and more strategic learning and reflection.

I’ve argued previously that a lot of what comes out in such reporting is single loop learning (WHAT we’re doing and fixing day-to-day operational problems) rather than double or triple loop learning, which reflects on deeper questions of HOW and WHY, and on deeper assumptions about the right course of action to address specific problems. The latter get us closer to understanding patterns and meanings, and to drawing inferences about whether our strategy is fit for purpose or not.

NOW WHAT

Then finally, we have to think about what actions make sense in context. What do the patterns tell us? Do some things work in some places and not in others? Do we therefore need to adapt in some places but not in others? Or do we need a complete rethink?

In many respects, the what, so what, now what structure is deceptively simple. Some of the simplest questions are actually the most difficult to answer. Lipmanowicz and McCandless’ exercise itself could also do with something more explicit on the HOW and WHY. We should always be a touch wary of step-by-step exercises and tailor them to our specific needs. But I think going back to WHAT, HOW, WHY, SO WHAT, and NOW WHAT can help remind us of the fundamentals, and hopefully prompt deeper, more meaningful, and more strategic learning and reflection on systems change.


Written by Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.
