John Martin, The Great Day of His Wrath

Sins of commission: The five most common mistakes when hiring an evaluator… and how to avoid them

Thomas Aston
6 min read · Jan 7, 2024

--

Written with Matthew Carr and Johanna Morariu

When you’re seeking out an evaluation partner, finding the right match is a challenge for both parties — the commissioner and the evaluator. At times, we make it even harder when we haven’t asked ourselves the right questions in advance. In the spirit of helping our fellow evaluation commissioners and practitioners avoid a few of these traps, here are the five most common mistakes we’ve seen (and occasionally committed):

1) Trying to answer too many questions

There’s nothing worse as an evaluator than finding that the scope of work (SoW) you’re responding to contains dozens of questions, many of them speculative and potentially unanswerable. Even with a robust budget, asking too many questions sets the evaluation up for failure. At best, it entails an awkward conversation with potential evaluators about what they can reasonably accomplish.

How to avoid the trap — ask yourself: “What’s the main thing I’m trying to evaluate?”

Commissioners can avoid this trap by choosing a manageable number of areas for the evaluation to focus on before publishing their SoW (e.g., what do you most want to learn or demonstrate?). The UK Evaluation Society, for example, proposes the principle of ‘realistic expectations of what an evaluation can and cannot provide.’ Relatedly, Eval Academy suggests that 5–7 questions are usually sufficient; rarely is more than that a good idea. Planning an evaluation often involves difficult decisions about trade-offs, so try to have at least some of those hard conversations internally to avoid sending mixed signals to your potential evaluation partner.

2) Trying to do an evaluation too soon

The realization that the intervention, project, program, or grant portfolio being put forward for evaluation isn’t really evaluable, or that an evaluation isn’t feasible (the World Bank has a feasibility checklist), is another potential nightmare. It’s not uncommon, for instance, for evaluations to be premature. Perhaps the intervention hasn’t been fully implemented yet, and thus there’s nothing to evaluate. In causal methods such as process tracing, for instance, you need an outcome from which to trace (i.e., you need a dead body to investigate a murder). If all you have is outputs, you may be wasting time and money. Alternatively, you might be asking for an “impact evaluation” when you’d be better off conducting a formative process evaluation.

How to avoid the trap — ask yourself: “Are we really ready for the evaluation we’re asking for?”

To avoid this mistake, commissioners should conduct an internal evaluability assessment before releasing their Request for Proposals (RFP) to determine the extent to which an intervention can be evaluated reliably and credibly. At the very least, you should have some evidence in hand that project activities have been implemented, or that an outcome has had enough time to occur (or should have occurred by then), before launching into an impact evaluation.

3) Taking a methods first approach, or not having any methodological guidance at all

There’s something of a “Goldilocks and the three bears” problem when commissioning evaluations. On one end of the spectrum, commissioners are entirely open to whichever method or methods evaluators believe are most appropriate to respond to the SoW. After all, they assume that evaluators know the most appropriate methods for the task at hand and have the skills to use them effectively. On the other end, some commissioners stipulate a specific evaluation method they believe is appropriate for the task at hand.

There are risks at both ends of the spectrum. A blank canvas makes it hard for commissioners, who may not be familiar with many methods themselves, to assess whether evaluators really know the methods they’re proposing to use. And stipulating a single method invites the law of the instrument problem: when all you have is a hammer, everything looks like a nail.

How to avoid the trap — ask yourself “Why choose these methods, and not others?”

In addition to making sure that the proposed methods answer your priority questions (see the design triangle), you can ask evaluators to demonstrate where they have used the proposed methods before. You can also ask evaluators to explain why they chose this method (or these methods) versus alternatives, or indeed how the methods (or parts of methods) fit together to become more than the sum of their parts (e.g., consider bricolage). This helps commissioners scrutinise chosen methods on their merits.

4) Assuming that newer approaches must be better

Evaluation, like any other discipline, is susceptible to trends and fads. Evaluation commissioners, especially in smaller organizations, can’t be expected to have read the latest articles in evaluation journals, or to know the main pros and cons of dozens of evaluation methods. Even with useful guides such as the Magenta Book and its annexes, which helpfully list pros and cons, it can still be difficult to know what’s the best fit for a given evaluation.

Evaluators also have substantial incentives to seek out brand new approaches, methods, and tools, and to use our evaluation projects to road test them (we are no exception). In recent years, certain methods and approaches have taken off. We’ve all seen the rapid rise of Randomized Controlled Trials (RCTs) leading to their use in contexts where they’re demonstrably not the best approach, and we’ve seen adaptations of these (e.g., factorial designs and adaptive trials) also used for the wrong purposes. Developmental evaluation and Bayesian process tracing are two prominent examples on the other end of the evaluation spectrum. All of these can be very useful and appropriate under certain conditions and for certain purposes. But none are magic bullets; no methods are.

How to avoid the trap — ask yourself: “Am I being influenced by novelty bias?”

Commissioners should display a healthy dose of scepticism. Consider whether what is being proposed truly fits the purpose of the evaluation you want to commission. When an evaluator tells you what they are going to do is “rigorous,” ask them to explain why what they propose will add rigour to this evaluation, rather than accepting that it is “rigorous” in general. Whether approaches, methods, or tools are any good is all about whether they’re appropriate for the task at hand.

5) Not creating enough time or space for meaningful inclusion of community perspectives

We want evaluation results to be timely in the hopes that the findings will be actionable and inform organizational decision-making. But this sense of urgency can also lead evaluation commissioners to short-change critical practices around inclusion and participation that often require a project to move more slowly or take longer to complete.

How to avoid the trap — ask yourself: “Are we trying to move too fast, and at what cost?”

Commissioners should understand and explicitly assess the trade-offs inherent in any evaluation project. We’ve been convinced by the notion that rigor can be inclusive (i.e., communities can themselves play an important role in narrating and explaining pathways to change). But you have to create time and space to elicit those valuable perspectives. So it can be helpful to move a step slower to bring those perspectives in and tell a fuller, more meaningful story of change.

Conclusion

Commissioning evaluations is fraught with challenges. These were simply five we’ve seen more often than others. There are several more we could have mentioned, such as waiting too long to start thinking about an evaluation, not having a clear audience in mind from the beginning, or not having a dissemination plan for how you’ll get the results to those who need them most.

Overall, it’s important to keep your eyes open and continue to question your assumptions at the beginning, middle, and end of an evaluation journey. We believe these five tips go a long way to avoiding the sins of commission:

(1) focus your attention by asking fewer but more strategic questions;

(2) ensure you’re truly ready for evaluation through an evaluability assessment;

(3) make sure the methods are fit for purpose;

(4) take a deep breath before trying out the shiny new thing; and

(5) create enough space and time for community perspectives to tell the full story.


Thomas Aston

I'm an independent consultant specialising in theory-based and participatory evaluation methods.