Helen Frankenthaler, Riverhead

How to think differently about the value of interventions

Thomas Aston
8 min read · Nov 17, 2024


With Florencia Guerzovich

In a previous blog, we discussed some general limitations of the “smart buys.” In this blog, we’ll take a look at why a different approach to assessing value is necessary for complex programming such as social accountability.

Accountability is about relationships; it's relational. So, in accountability interventions, relationships aren't a side issue, they're central. In the last blog, we looked at the "smart buys" in the education sector. Visit any school and you'll find that most people (principals, teachers, members of Parent-Teacher Associations, or PTAs) spend a substantial proportion of their time building and maintaining relationships to address problems collectively. For instance, when we visited the Dominican Republic for an ex-post evaluation of World Vision's "Read" and "Community Participation in How is My School Doing" projects, principals and teachers were repeatedly concerned with finding ways to recruit parents to support school activities.

Relationships aren't something you deliver, like information on the value of education or training on lesson plans, so they don't fit neatly into the "smart buys" framework. Relationships are virtually invisible in the smart buys report: the word is mentioned only once in 68 pages. By comparison, there are 51 mentions of tests and 45 mentions of (randomized control) trials. Relationships are connections and processes that ebb and flow throughout the implementation of an intervention, and, as Mongolian and Dominican officials reminded us, they can often make or break it.

When interventions are conceptualised relationally, rather than as merely a set of technical tools and methods that fix incentive problems (the ill-fitting analogy of plumbers fixing broken pipes), our outlook changes. In the case of community engagement in school management, the Panel noted that interventions may be most effective where they are less needed, essentially because of the quality of relationships: in those places, the power asymmetries between school authorities and parents are small and there are well-functioning sources of school accountability. However, this is speculation, not backed up by evidence. It ignores that these interventions themselves may have the potential to change relationships, tackle power asymmetries, and help rework roles and responsibilities (i.e., social contracts) in schools, as World Vision saw in Indonesia and the DRC. This is also what the smart buys Panel's own India example attempted, and failed, to achieve. Clearly, a greater focus on relationships and power dynamics is merited.

How people implement is as important as what development partners, experts, and others plan to implement. And yet, the Panel treats implementation fit with these relational dynamics as a secondary concern, arguing that 'implementation fidelity is a critical element of program impacts.' This is a "best practice" replication mindset, which assumes there is indeed a "silver bullet" out there somewhere that can easily be copied and pasted elsewhere. It risks missing both the wood and the trees, and the connection between the two. For instance, because the Panel clearly identifies individuals as the primary units of analysis (each student's test, each teacher's competences), individuals' social nature, their relationships, and related social norms (not mentioned) and institutions (mentioned) are conceived as background, rather than as systems factors that call for adaptation over fidelity. Little room is afforded to the complexities of implementer discretion, or to the "craft" required to meet parents, teachers, and other actors where they are. That place is one where change is drawn out, partial, and non-linear, and often the result of what happens during implementation, where multiple types of relationships between people in the education system contribute to outcomes.

Local actors across the Dominican education system are sensitive to implementation realities. Unlike "smart buys" proponents, they have a much-needed systems lens. For example, they often work within the same rules and policies rather than changing them. Their well-placed actions can create room (or leverage points) and tap into the rules to insert important adjustments. Facilitators of social accountability projects, much like teachers, who work with urban parents who can only attend the Parent-Teacher Association (PTA) meeting after finishing what might be their second or third job, face different constraints from those who work with parents living within walking distance of the rural village school. A tweak makes the difference in whether the PTA mandated in law actually meets. A district or regional supervisor who finds creative ways to motivate the PTA can make a different contribution to how the law works than one who never shows up at the school. And often, these and other actors exercise their agency to unlock resources that school-based management bodies should be getting on their own.

The failure to engage with evidence on systems and politics

The "smart buys" report cops out on systems and complexity. The authors implicitly seem to understand that the delivery of quality education is a complex endeavour, even if they only mention "complex" once: they have an annex on the importance of systemic reform. (They would do well to review USAID's own Local Systems Position Paper.) The report acknowledges that the specific, siloed interventions it focuses on are not all that matter, including for sustainable results. It explicitly states that "ensuring learning for all children and youth requires an education system that is coherent and aligned toward learning." But then it states that this kind of reform effort, which includes looking beyond intervention siloes and considering their interactions, is "hard to evaluate rigorously (i.e., with RCTs)."

The report does not claim that systems change is unimportant. It merely suggests that this is extra homework for policymakers, because a host of relevant and valuable evidence is deliberately excluded for not being experimental.

The "smart buys" report authors don't consider systems an important topic because their preferred technique for learning about "what works" for quality education (RCTs) does not apply well to such changes. That intentional decision is a problem because "uptake starts with research [or evaluation] design."

We see the same kind of narrow-mindedness on display below:

Chris is totally right here. Look beyond RCTs, and there is rigorous qualitative research about "messy" political systems-strengthening processes, despite a ubiquitous lack of political will among heads of government and weak education ministries, from Latin America to Southern Africa to Vietnam and beyond. This research suggests that merely hoping for a head of state to descend from the heavens and commit politically can be an unproductive pretext for inaction.

Our ex-post evaluation in the Dominican Republic spotlights how short-term projects can feature within these broader reform processes. A set of local actors, in this case a loose policy network, used these projects and their own positioning to improve how participatory school-based management works in practice. An important part of what made things work was their ability to build positive interactions among short-term projects. In schools, World Vision didn't have to start from scratch every time, which is particularly important when implementers need to create in schools the relationships that enable actors to develop a new, shared sense of possibility. In some cases, World Vision had been in the schools or communities for years. So, in several cases, the baseline wasn't when the second project started, or even the first.

At the policy level, they gradually filled regulatory gaps with insights from implementation feedback. Another key element was the project team's ability to mitigate the risks and harms produced by other interventions, including a researcher-led RCT by the World Bank (one that aligns neatly with the "smart buys" authors' biases), which, according to numerous schools we visited, damaged communities' trust in donor-funded projects. We agree with the Panel that 'how a given program interacts with other interventions' is a crucial part of context and important to study. Unfortunately, very few research initiatives or evaluations actually do this, and certainly no RCTs do.

A Revamped Global Evidence Base for Education

Given these blind spots, the "smart buys" report doesn't usefully help donors figure out how to invest their money. It's not merely limited; it's intrinsically flawed. It effectively ignores areas that are tricky to research (by its authors' preferred methods), even though they are vital to actually spurring education outcomes. So, many of its recommendations are poorly evidenced and outright misleading, except in a narrow set of circumstances. To be charitable, as mentioned above, the report doesn't say that systems investments are unimportant and that donors should only allocate funding to a discrete set of great and good buys, but its authors mostly exclude evidence on this due to their methodological biases. Evidence-based investments in systems can support the local actors who deliver quality education. So, what should donors do?

  • Donors should rebalance their investments in global evidence towards relationships, politics, and systems. Ben Ross Schneider argues the field has over-invested in narrow experimental educational research, and Faul and Savage call for education systems research. It's still unclear whether new investments in implementation research will avoid the same problems if they do not aim to provide evidence for people embedded in relationships, for whom contextualisation and adaptation are part and parcel of the work.
  • Donors should focus on the evidence that decision-makers and implementers need (i.e., an evidence base that considers their circumstances and rewards using it). Skipping this consultation is a serious red flag. As Michael Quinn Patton's utilisation-focused evaluation and other approaches suggest, evaluation should be judged on its usefulness to key stakeholders, and it should meet them where they are.
  • Donors should invest in generative evidence focused on surfacing, systematizing, and learning about systems practice. Rigorous case study causal research and evaluations can help surface insights about how practitioners on the ground already think and work in systems. It's possible to learn how projects are contributing, or did contribute, to relational dynamics, to the interactions between interventions, and to systems strengthening.
  • Donors should invest in monitoring, evaluation, learning, and storytelling over time. Quality evaluation takes longer than a project or an electoral cycle, but donors don't need to throw the baby out with the bathwater. The ex-post evaluation in the Dominican Republic illustrates that donors don't need to leave decision-makers to their own devices. A more responsive research and evaluation agenda can provide answers to this ubiquitous challenge. Donors could use vehicles such as innovation funds for ex-post evaluation of a country's trajectory, a library of demonstration cases, or other mechanisms for incentivizing and enabling the production and use of evidence that can be embedded in strategies, theories of change and log-frames, capacity-building efforts, contracting, and budgets.
  • Funding for systematic comparative analysis across well-selected cases may help decision-makers understand the conditions under which lessons from one context may travel to another. We have good reasons to suspect that investments in real-time and ex-post evaluation focused on social accountability for school-based management interventions in Moldova, Mongolia, and other countries that share similar political economy conditions with the Dominican Republic could be a smart investment. Comparisons with countries that do not share the same background conditions could also be valuable, to better understand where NOT to extrapolate insights.
  • Donors might also invest in other forms of convening to foster exchanges among staff and practitioners navigating similar types of systems. These convenings are usually useful for surfacing tacit knowledge and creating incentives, on their own terms. They can help shape utilisation-focused research agendas that by design focus on meaningful sweet spots: lessons should neither always be generalised nor always be ignored. Donors can and should be proactive in moving away from methodologically and ideologically driven cacophonies.
  • Donors should set aside "smart buys" for use only in short-term mitigation efforts, where they might conceivably be useful. They should set incentives for their partners accordingly, and take the risks associated with a short-term "smart buys" mindset seriously.

Written by Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.
