Bricolage and alchemy for evaluation gold

Thomas Aston
3 min read · Dec 24, 2020


Frank Bowling, Moby Dick

Over the course of the blog series, I’ve asked you to break down walls, to question your assumptions, and look more closely at what others are doing. I believe we, the non-randomistas, have more in common than distinguishes us. As there is no gold standard, I believe the best we have is alchemy — that is, only together can we transmute baser metals into gold.

Frank Bowling’s “Moby Dick” (1981) above creates spectacular marbling effects likened to “atmospheric impressions of skies, visions of the cosmos, or alchemical transformations.” He was also the first black artist to be elected a Royal Academician (shockingly, as late as 2005). Perhaps we evaluators shouldn’t be quite so shocked. After all, very nearly everything in evaluation is written in English by white people. It’s a reminder of who controls the discourse (people who look and sound like me). I’m hoping we can begin to change that and broaden the conversation, because I’m a bit tired of echo chambers that reinforce my biases. As I look at Achille Mbembe’s On the Postcolony on my shelf, I want to read things that weren’t first written in English, ideally with metaphors I don’t really understand. I want to read things that don’t necessarily come from one of the many method-based MEL communities of practice, or at least that cross-fertilise between them. The point here is that we can really benefit from diverse perspectives and we can do more together than we can apart. And if we can’t transform the system overnight, then method bricolage (or alchemy) is one way to go.

In less pretentiously medieval terms, Rick Davies’ response to my first blog was the following:

“Recombination is the main source of creativity within evolutionary processes. Similarly, within evaluation practice, I think that the imaginative combination of methods could be the way to go. Not just different methods for different evaluation questions, but different methods in combination to address a given evaluation question.”

So, let’s take some inspiration from the various experiences I’ve mentioned over the blog series, in which evaluators had the courage to recombine methods. Go forth and bricolage. And if you’ve read any of the series, please write a blog of your own to critique (as Rick did) or to expand the debate. I received lots of useful input from others over the course of the series and I’ve learned a lot. I’m hoping others will now take the baton to run the next leg.

If you missed the series, here it is in full:

High priests, method police, and why it’s time for a new conversation: Why it is a waste of time criticising RCTs, and why we should be focused on improving what our own participatory and theory-based methods can help us do together.

Niches, edifices, and evaluation jargon: Why politics, power, and prestige may have prevented us from having more fruitful conversations about the relative merits of different participatory and theory-based methods.

Miracles, false confessions, and what good evidence looks like: How Process Tracing can help us assess the quality of evidence (see here for a learning paper on 6 Process Tracing evaluations).

Boundaries, relationships and incremental change: How Outcome Mapping and the Actor-based Change Framework can help us think about relationships and what really drives behaviour change.

How to be smarter about narrating behaviour change: The value of outcome statements from Outcome Harvesting.

Whose story are we telling and who are the storytellers? Why we need to refocus attention on people retelling their own stories and listen to their own justifications of significance in our use of Most Significant Change.

Windows on the world: The power of assumptions in uncertain times: Why theories of change are all about assumptions and how to go about taking them more seriously.

Rubrics as a harness for complexity: How rubrics can help us make more effective evaluative judgements from participatory and theory-based methods for MEL (see here for Contribution Rubrics).

The above are all tools, methods or approaches I’ve used personally. You can expect more to come on Process Tracing and Realist Evaluation in a separate series. As I learn more, I may also write something more detailed on the benefits of the Qualitative Impact Protocol (QuIP) and Contribution Analysis. I have also promised Chris Roche to write something further on the politics of assumptions.



Written by Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.
