Figuring out “what is the work we actually do?” sounds like a simple task, but in fact it’s surprisingly difficult.
David Jacobstein wrestled with this question about 18 months ago, asking how far to confront donors with the reality of what their work is, versus continuing to accept the terms on which they receive funding. I’ve also been reflecting recently on what “counts” as social accountability, and reading Brendan Halloran’s piece on the evolution of “accountability ecosystems” as a concept has prompted me to share a few thoughts of my own.
Perhaps the most common definition of “social accountability,”…
I finally got the chance to read the Accountability Research Center’s (ARC) recent study on “bottom up accountability” in Uganda. It’s a long read, but worth it because it offers some insights into a “black box of implementation.”
I’ve discussed opening black boxes before, but I wanted to offer a few thoughts on what “opening the black box” really entails in the accountability debate, and what implications this may have for practitioners and donors.
The takeaway I want to emphasise here is that what matters is not just what and how much you do, but when you do it, where…
While there’s little doubt that theories of change have frequently been misused and abused, I argued that they remain very useful when we use them to critically examine our hypotheses and assumptions. I suggested they are most useful when we: 1) set clear boundaries, 2) are problem-driven, 3) are evidence-based, 4) are explicit about testing our assumptions, and 5) review key areas of focus regularly.
As Thomas Dunmore Rodriguez explains:
“The end result can often remain a fairly sketchy story of change, with lots of untested assumptions.”
Dunmore Rodriguez recalls a complex theory of change mapped across the concentric circles of a socio-ecological model, illustrated below:
Why do we so often end up with a fairly sketchy story of change with lots of untested assumptions?
In my view, despite the very…
In recent years, international development programmes have increasingly sought to address complex problems, and this has led to growing demand for “complexity-aware” approaches to monitoring and evaluation. One key frontier is the family of approaches known as theory-based evaluation.
In this webinar hosted by the Centre for Development Impact (CDI), I argue that there is a need for combining theory-based methods to improve evaluation practice and shed light on causal mechanisms.
I draw on recent lessons in comparative learning from six process tracing evaluations conducted between 2017 and 2020. I explain how realist evaluation can strengthen…
In February, Derek Thorne wrote an article on how we define accountability, cautioning us to be careful about what we call accountability. This triggered a Twitter exchange in which Nathaniel Heller argued “we should abandon the label ‘accountability’ entirely,” as “it’s misleading, empty, and of little use to practitioners.” Nathaniel offered “punishment” and “answerability” as alternatives, while Tiago Peixoto offered “sanction” and “responsiveness.” Jonathan Fox harked back to Andreas Schedler’s definition of “answerability” and “enforcement (application of sanctions).” …
When looking at complex change systems, it’s not enough to reflect only on actions and results. As Chris Roche put it to me recently, teams also need to periodically reflect on their assumptions about how change happens, and on their identity and values, what some call triple-loop learning.
Theories of change aren’t just a tick-box exercise or log frames on steroids. They are about the process of achieving a shared understanding and about reflecting on our assumptions. Learning is a process of reflection and of gaining (new) knowledge and understanding. We’ve all seen this, right?
I'm an independent consultant specialising in theory-based and participatory evaluation methods.