How to be smarter about narrating behaviour change

Thomas Aston
Dec 23, 2020
[Image: Olafur Eliasson, Your Uncertain Shadow]

In a previous blog, I looked at the potential value of Outcome Mapping and the Actor-based Change (ABC) framework. I focused on the importance of being realistic about influence, on the journey as well as the destination, and on why relationships between and among actors matter, and I reflected on what really drives behaviour change. In this blog, I will focus on how to narrate behaviour change through Outcome Harvesting.

Outcome statements

For me, the best thing about Outcome Harvesting is outcome statements. In a nutshell, you describe an actor’s behaviour change, explain why it’s significant (or important) in a particular context, and describe your contribution (the plausible links between the change and your actions). Outcome statements help you focus on the who, what, when, and where of change. While this seems obvious, we rarely include all this information when articulating outcomes (behaviour changes).

Below you can see an example of how to structure this information:

In 2008 (when), the UN Peacebuilding Commission (PBC) (who) strengthened the language (what) in its semi-annual review of peacebuilding in Burundi (where) regarding the importance of accountability and human rights training for the security services, reflecting civil society concerns about human rights abuses in 2007–2008 (Scheers, n.d.).

This seemingly simple exercise of pinning down what happened and how it happened reveals a great deal about your gaps. You should write these statements in a reasonable amount of detail and stick to one “what” per outcome statement so you don’t end up looking at multiple changes at once (which you would only have to untangle later). Ideally, someone who doesn’t know the initiative or have any knowledge of the context should be able to understand who changed what, when and where it changed, and how your intervention contributed to that change (see Wilson-Grau and Britt, 2013: 10). Also, see this useful new video on how to formulate an outcome statement.
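
To make that structure concrete, here’s a minimal sketch of my own (in Python, purely illustrative and not part of Outcome Harvesting itself) of how an outcome statement could be captured as a structured record. The contribution text in the example is an assumption I’ve added for illustration, since the original statement only implies that link.

from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    """One behaviour change, pinned down by the who, what, when, and where of change."""
    who: str           # the actor whose behaviour changed
    what: str          # the observable change in behaviour (one change per statement)
    when: str          # when the change happened
    where: str         # where / in what context it happened
    significance: str  # why the change matters in this context
    contribution: str  # the plausible link between the change and your actions

# The UN Peacebuilding Commission example above, expressed as a record.
# The contribution text is illustrative; the original statement only implies it.
pbc_outcome = OutcomeStatement(
    who="UN Peacebuilding Commission (PBC)",
    what=("strengthened the language on accountability and human rights "
          "training for the security services"),
    when="2008",
    where="semi-annual review of peacebuilding in Burundi",
    significance=("reflected civil society concerns about human rights "
                  "abuses in 2007-2008"),
    contribution="civil society advocacy raised these concerns with the PBC",
)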

I’m convinced this is useful whatever other methods you’re using. For example, Process Tracing (and Contribution Tracing) could use this format to help construct contribution claims and Realist Evaluation could use outcome statements to help distinguish between Context, Mechanism, and Outcome (CMO). In Contribution Analysis, outcome statements could help you develop the specific questions to address. If your question is “did the development intervention influence a change, or did the intervention make an important contribution to a change?” you first need to define what that change actually is. Given that in each of these methods you start with defining the change, drafting outcome statements is probably the first thing you should do in an ex post theory-based evaluation.

The value of outcome statements becomes clearer still when you’re able to pin down the change further. Even though it’s not always possible (or even desirable) to set SMART targets, making outcome statements SMART (Specific, Measurable, Achieved, Relevant, and Timely) can make a huge difference. In particular, when you make your outcome statements specific, it can help substantiate (or refute) your claim of influence. Using active verbs (e.g. used, promoted, published), rather than passive verbs, helps. Ultimately, if you’re vague about your outcome, if it’s difficult to verify, if it’s not clear there’s a plausible link between your actions and outcomes, or if the outcome isn’t relevant to the impact you seek, then it’s not worth assessing anyway.
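
Purely as an illustration (building on the record sketch above, and using crude heuristics of my own), you could screen harvested statements for obvious gaps before investing coaching time in them:

def screen_outcome(outcome: OutcomeStatement) -> list[str]:
    """Flag obvious gaps in an outcome statement before coaching and verification.

    This is only a crude completeness check; it can't judge plausibility,
    relevance, or whether the change is genuinely verifiable.
    """
    flags = []
    for field in ("who", "what", "when", "where"):
        if not getattr(outcome, field).strip():
            flags.append(f"missing '{field}'")
    # Passive constructions often hide the actor; this is a very rough heuristic.
    if " was " in outcome.what or " were " in outcome.what:
        flags.append("possible passive voice in 'what'")
    if not any(ch.isdigit() for ch in outcome.when):
        flags.append("'when' contains no date or year")
    return flags

print(screen_outcome(pbc_outcome))  # [] if nothing obvious is missing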

We can dig a little deeper here on how other methods can benefit from being SMARTer. Process Tracing requires outcomes to be precisely expressed, and outcome statements help achieve this because they require outcomes and actors’ contributions to be precisely formulated. One example of this can be found in the Chukua Hatua effectiveness review for Oxfam. This can also work both ways: outcome statements can be further strengthened by both Process Tracing and Contribution Analysis, which can help us better understand the chronology of events and the connections between programme outcomes in a single causal chain or across several (see this evaluation for how to combine Outcome Harvesting and Contribution Analysis). Getting specific about chronology recently allowed me to thread a cluster of significant outcomes together, which showed how outcome statements can be nested within Process Tracing.
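
To illustrate the chronology point, here is a toy sketch of my own (again assuming the record structure above) that simply orders harvested outcomes into a timeline:

# Order a cluster of related outcomes chronologically to sketch a causal chain.
# In practice 'harvested' would hold many related outcome records, not just one.
harvested = [pbc_outcome]
timeline = sorted(harvested, key=lambda o: o.when)
for o in timeline:
    print(f"{o.when}: {o.who} -- {o.what}")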

I’ve found that writing SMART outcomes is easier to do in English than in Spanish or French. This is down to differences in syntax and the use of the passive voice in Latinate languages. We also remember different things in different languages and, relatedly, we recall attribution differently. For example, in English we typically say that someone broke a vase even if it was an accident, whereas Spanish and Japanese speakers tend to say that the vase broke itself. Our sense of time and spatial awareness also varies by language. So, it’s worth bearing in mind that it may take more time to coach teams in languages other than English, and speakers of different languages will be better (or worse) at explaining different things.

As Richard Smith reminded me recently, getting SMART-enough outcome statements is largely about coaching the teams who describe their outcomes, helping them turn vague (and passive) statements about change and their role in it into something concrete and observable. Pushing sources to be specific and plausible clarifies their thinking and weeds out the less concrete notions of possible change. So, coaching helps refine a team’s critical thinking about what they have achieved (or not).

One part of this coaching, of course, is helping teams to determine an appropriate number of outcomes that are worth writing up and substantiating. There is no correct answer to this question. However, each outcome statement and its supporting material typically runs to a few paragraphs, and coaching will often involve two or three rounds of feedback (email exchanges, Skype calls, or face-to-face meetings) for each outcome. So, bear that in mind when you assess the significance of each outcome. I’d say any outcome you believe is highly significant is worth the time; otherwise (logically) it wasn’t significant enough.

Adaptations

There are various other methods that are reconcilable with outcome statements. I take a very pick-and-mix approach to methods. This isn’t simply laziness or arrogance; it’s because I believe we can leverage some of the best of different methods to make them collectively stronger.

Beyond those I mention above, I did an adapted fuzzy set Qualitative Comparative Analysis (fsQCA), built on adapted outcome statements, to measure the impact of advocacy for CARE. This included an assessment of the significance of the advocacy success, the level of contribution, and the quality of evidence underpinning contribution claims. Evidence quality was assessed using Process Tracing evidence tests. You can come up with your own criteria for significance (e.g. relevance, scale, newsworthiness, sustainability), as there’s no one correct way to do this. Levels of contribution can also easily be converted into qualitative scales (or rubrics) to help compare levels of outcomes (as you can see in Contribution Rubrics).
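
As an illustrative sketch only (the labels and score values below are my own assumptions, not the calibration used in the CARE work), converting a qualitative contribution rating into a fuzzy-set-style membership score might look like this:

# Illustrative mapping from a qualitative contribution rating to a 0-1
# fuzzy-set membership score for cross-case comparison. The labels and
# values are assumptions made for the sake of the example.
CONTRIBUTION_SCALE = {
    "none": 0.0,
    "minimal": 0.33,
    "moderate": 0.67,
    "substantial": 1.0,
}

def contribution_score(rating: str) -> float:
    """Map a qualitative contribution rating onto a fuzzy membership score."""
    return CONTRIBUTION_SCALE[rating.strip().lower()]

print(contribution_score("moderate"))  # 0.67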

Root Change and Chemonics have also recently adapted Outcome Harvesting in the Strengthening Advocacy and Civic Engagement (SACE) Programme in Nigeria. Whereas I used a spreadsheet, SACE adapted a matrix, combining Outcome Harvesting with Coffman and Beer’s (2015) Advocacy Strategy Framework to help organise a range of outcomes, as described below:

The matrix served dual roles as both planning framework and outcome tracker. Root Change and Chemonics argue that visualising stories like this (with outcomes and contributions mapped out) allowed teams to track how changes moved from awareness, to commitment, to action, and also to see the connections between different contributions and outcomes. It allowed partners to celebrate their own outcomes, but also to identify gaps in their advocacy strategies and to point to areas where the groups (clusters) needed to diversify their membership or seek new allies who would complement existing priorities. Teams also created “journey maps” to better understand the reasons for strategic pivots, the most important strategies, and the credibility of claims of contribution towards outcomes. This is quite similar to a RAPID Outcome Assessment map. Contribution ratings were also used with minimal rubrics (none, minimal, moderate, and substantial), as well as significance ratings (with an impact scale) and an assessment of the strength of evidence. This fits well with both the Process Tracing and Contribution Analysis adaptations mentioned above, as well as with evaluation rubrics, which I’ll discuss in a future blog.
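
To give a flavour of the matrix idea, here’s a toy sketch of my own: the stages and rating labels come from the description above, but the outcomes, field names, and entries are invented for illustration and are not SACE’s actual data.

# A toy outcome-tracking matrix in the spirit of the SACE adaptation: outcomes
# grouped by the stage of change they represent, each with a contribution rating
# (none/minimal/moderate/substantial), a significance rating, and a note on the
# strength of evidence. All entries below are invented for illustration.
matrix = {
    "awareness": [
        {"outcome": "Legislators publicly acknowledged gaps in budget transparency",
         "contribution": "moderate", "significance": "medium", "evidence": "media reports"},
    ],
    "commitment": [
        {"outcome": "Ministry committed to publishing quarterly budget data",
         "contribution": "substantial", "significance": "high", "evidence": "signed communique"},
    ],
    "action": [],  # an empty cell like this points to a gap in the advocacy strategy
}

# Stages with no harvested outcomes yet highlight where a strategy may be thin.
gaps = [stage for stage, outcomes in matrix.items() if not outcomes]
print(gaps)  # ['action']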

Saferworld, Save the Children and Clear Horizon have all made similar efforts to adapt aspects of Outcome Harvesting related to significance and contribution ratings. So, hopefully, there’s a conversation to be had about how these ratings, and outcome statements more generally, can be nested within other methods.

In the next blog, I’ve decided it’s time we revisited Most Significant Change (MSC), so I’ll probably look at what “significant” means and how the views of different populations should be taken into account.

My thanks to Richard Smith, Kaia Ambrose, and Ximena Echeverría for helpful comments.
