“Real” process tracing: part 6 — interviews

Thomas Aston
12 min read · Dec 29, 2020


Michelangelo Merisi da Caravaggio, The Cardsharps

In this final blog in the series, I aim to exhaust the possibilities of integration between Process Tracing (PT) and Realist Evaluation (RE). With this brief, I will turn to the part of RE I struggle with the most — interviews.

When I was trained on RE, I remember we spent at least an hour critiquing research papers for not being “real” enough. One participant was told that if she had already collected interview data, she should only call her study “realist-informed” rather than “realist.” This was deemed to be a diminished calibre of data collection for the purposes of journal publication. She was (quite reasonably) frustrated by this news.

As most of those in the room were either PhD students or academics, I understand the journal publication appeal. However, I found it curious that somehow your status would be diminished if you didn’t conduct interviews through a realist lens. In The Craft of Interviewing in Realist Evaluation, Ana Manzano (2016: 4) notes that ‘qualitative interviewing is the most common method of data collection in realist evaluations.’

By the time of my training, I’d read most of the realist canon and employed various aspects of RE in practice, but the specific value-added of realist interviewing had somehow passed me by. So, what was all the fuss about?

In a nutshell, Ray Pawson sums it up as an approach in which:

I’ll show you my theory, if you’ll show me yours.

For Pawson, the evaluator’s theory is the ‘subject of the interview, and the interviewee is [only] there to confirm or falsify and above all refine that theory (Pawson, 1996: 299; Pawson and Tilley, 1997: 155).’ We’re not considering whether we have a ‘true representation of a subject’s attitudes… [or] faithful representation of their beliefs’ but only ‘aspects of the subject’s understanding which are relevant to the researcher’s [CMO] theory (Pawson and Tilley, 1997: 164).’

Pawson suggests we can do this through the “teacher-learner cycle.” He argues that ‘understanding of contexts and outcomes should be led by the researcher’s conceptualisation’ and the ‘conceptual distinctions involved should be derived from the researcher’s theory and these meanings should be made clear to the respondent in the getting of information (Pawson, 1996: 303, 305–306).’ The ‘teacher-learner cycle usually involves teaching the interviewee the particular programme theory under test (Greenhalgh et al. 2017: 1).’

In principle, you’re asking interviewees to theorise the programme with you, giving them space to think, and the (well-informed) respondent should then be able to teach the evaluator about those components of the programme.

The reason why we’re encouraged to take this approach is that some realists claim that unless you reveal your theory to the interviewee, you can’t reach sufficient “ontological depth” (and reveal underlying mechanisms). As per Lemire et al.’s recent review, contrary to the original definition of mechanisms as “resources” and “reasoning,” the most common conceptualisation of mechanisms in RE is actually as activities or actions taken in the programme. So, there do appear to be legitimate grounds for the “method police” to intervene.

Ana Manzano (2016) offers some helpful guiding principles for how to conduct realist interviews. She proposes interviewing in three phases. First, interview practitioners who know the programme well (theory gleaning). Then interview frontline practitioners (theory refinement). And finally, interview service users (theory consolidation). Interview sampling should also be based on CMO investigation potential (Pawson and Tilley, 1997). Manzano thus suggests that you need to collect enough data to shed light on your proposed theory (typically a large amount), and that data collection should be iterative (i.e. repeat interviews). There is, however, a debate as to how many interviews is enough, as I explored in a previous blog on sampling.
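
For readers who organise interview data digitally, here is a minimal sketch of how this phased, CMO-led sampling might be recorded in code. Everything here (the class names, fields, and phase labels) is my own illustrative assumption, not part of Manzano’s guidance.

```python
from dataclasses import dataclass, field

# A candidate Context-Mechanism-Outcome (CMO) configuration under investigation.
@dataclass
class CMOConfiguration:
    context: str
    mechanism: str   # resources and reasoning, per the original realist definition
    outcome: str
    status: str = "candidate"   # candidate -> refined -> consolidated

# Manzano's three interview phases, in order.
PHASES = ("theory_gleaning", "theory_refinement", "theory_consolidation")

@dataclass
class Interview:
    respondent_role: str   # e.g. "programme designer", "frontline staff", "service user"
    phase: str             # one of PHASES
    cmos: list[CMOConfiguration] = field(default_factory=list)  # configurations it sheds light on
    notes: str = ""

def next_phase(interviews: list[Interview]) -> str:
    """Suggest which phase to sample next: later phases open up only once
    earlier ones have some coverage, and consolidation repeats (iteration)."""
    for phase in PHASES:
        if not any(i.phase == phase for i in interviews):
            return phase
    return PHASES[-1]   # keep consolidating via repeat interviews
```

The only point of the sketch is that sampling follows the theory: interviews continue until each candidate configuration has been gleaned, refined, and consolidated, rather than stopping at a quota of respondent types.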

Manzano criticises traditional advice to feign incompetence and act deliberately naive in order to avoid data contamination (Kvale and Brinkmann, 2009), as well as guidance which calls for interview standardisation and consistency. She recommends tailoring interviews, as different interviewees will shed light on different aspects of contexts, mechanisms, or outcomes. This might mean changing the words, the word order, or even the whole focus of an interview. As with much of Pawson’s earlier work, this reads like a renewed critique of surveys and structured interviews, which do indeed lack depth. Yet, the critique was more distinctive in 1997 than it is today. As even experiment-friendly Howard White and Daniel Phillips (2012: 28) note, ‘interviews should be carefully planned, semi-structured and targeted to explore key parts of the causal chain.’

To my mind, there are no issues with theory-based sequential sampling, iteratively developing theory, tailoring interviews, or giving interviewees some sense of context and of what the expected outcome may be. These are all good things. The problems arise with being explicit about your theory, and with whom you are explicit about it. If you’re evaluating a project, there are many benefits to a participatory or partner-led approach in which you develop a theory of change or causal chains with a project team to test how and why an intervention may have worked. It may even be desirable for the project team to conduct the interviews, but you can’t ignore the various biases inherent in small n research (White and Phillips, 2012), especially if you pursue the teacher-learner cycle.

Being explicit about an evaluator’s theory with interviewees contributes to at least five potential issues:

  1. May wrongly assume evaluator and interviewee have comparable power;
  2. Could impose the evaluator’s conceptual system on the interviewee;
  3. May make the interview process more extractive;
  4. Likely increases risks of courtesy and confirmation bias; and
  5. Likely increases risks of cherry-picking to suit an evaluator’s preferred explanation.

Particularly in the international development sector, evaluators and interviewees rarely have comparable levels of power. Pawson’s teacher-learner cycle appears to assume relatively equal power between interviewer and interviewee, yet questioning an evaluator’s theory requires a high level of self-confidence from the interviewee. As Ana Manzano (2016: 12) points out, the ‘interviewee is not supposed to control the direction of the conversation, it is the interviewer who will aim to take control and to steer the interview towards his/her topic of interest.’ Pawson’s (1996: 304) elitist goal of recalibrating the ‘division of expertise in the interview’ means that the interviewer is supposed to be fully in charge. Power shifts (even further than usual) towards the interviewer, who is encouraged to steer the interview to confirm, falsify, or refine their theory. It’s argued that:

‘At some stages the interviewer is teacher (“here is an element of programme theory”) and at other times the interviewee is (“and here’s how it does or doesn’t work here”). The idea is that the interview evolves into a discussion (Greenhalgh et al. 2017: 1).’

Such a power-neutral expectation of teacher and learner relations seems optimistic, at best.

The emphasis is officially on refining theory rather than confirming it. However, theory refinement is a real skill, and requires a good deal of humility and a substantial investment of time. I’ve read various books on qualitative interviewing (including those Manzano critiques) and conducted hundreds of interviews. Yet, I’d argue it’s a skill I’m still mastering. I’m comfortable interviewing people like me, and I will “naturally” favour the views of technocrats, like clever doctors. Recognising this, I may not be the best person to interview community members in a far-flung land I don’t know or have never even visited. The wider problem is that:

‘Evaluators typically speak mostly to staff of the agency that they are evaluating, staff of other relevant external agencies, and selected government officials. They often conduct very limited work in the field, and rarely speak to parliamentarians, traditional authorities (such as village chiefs), trade unions, journalists and other key actors (White and Phillips, 2012: 27).’

Most people think they’re good interviewers. In my experience, most aren’t. Refining theory without significantly biasing responses requires exceptional skill from the evaluator and an extremely high level of trust with interviewees (which may require iterative interviewing). Without these, only the most assertive interviewees are likely to challenge the interviewer’s interpretation.

In the international development sector, these difficulties are compounded when white English-speaking men like me have to work through interpreters. Though not inevitable, control over the quality of interviewing may also diminish when it is outsourced to local universities or consultancy firms (as one well-known RE titan told me). Lemire et al.’s data show that only 8% of the studies reviewed were conducted in Africa. Over half were from the UK, and over three-quarters were from the health sector. So, a lot of what we know and have come to expect in RE comes from a country and a context with some of the highest research standards in the world.

When research skills are lower, there are good reasons to choose either structured interviews or semi-structured interviews with open questions, and in either case, to hide your hand rather than show all your cards. I’ve had various occasions when a sub-contracted researcher asked biased follow-up questions anyway (one assumes, due to contract renewal bias), making the interview data largely unusable.

Realist evaluators are advised to do explicit theory-testing with policymakers and only implicit theory-testing with programme participants. The reason is that we assume policymakers know more about programme design than participants do. It appears to have little to do with considerations of power asymmetries. Yet, these asymmetries matter. On one hand, the interviewer has a licence to frame issues as they see fit and may unconsciously bias questions to conform to the hypothesised CMO configuration (observer-expectancy effect and confirmation bias).

More problematically, we might expect interviewees to exhibit authority and courtesy biases. It’s thus worth reflecting on whether participants have “learned” the programme theory and are merely reciting it back to you. Interviewees are liable to isomorphic mimicry and courtesy bias, whereby they imitate what they expect the interviewer wants to hear. This is especially likely if the interviewer is a white evaluator whom authorities or communities believe may bring in more money by writing a pleasing report to donors about a project working well.

When power relations are unequal, the likelihood that an interviewee will challenge (and therefore falsify) the theory is much diminished, even if they don’t agree with it. Biases vary according to context. Courtesy bias has been found to be stronger in Asian and Latin American countries (White and Phillips, 2012). In my experience in Latin America, I’m not sure how true this is for elites, but when race and class enter the picture, it certainly is. However, I’ve yet to read guidance in realist interviewing which engages with this issue in any depth. I’m keen to read such guidance if it does exist.

This mimicry is all the more likely if interviewees perceive their story is not being heard by the interviewer. What is an interviewee going to teach you if they believe you’re not really listening to much of what they have to say?

How do you minimise bias in a realist interview?

Rather than presenting your theories in full or all at once (as this can mute the conversation), we’re encouraged to present them in digestible chunks. Below I’ll summarise what might be considered common steps (see Pawson and Tilley, 1997; Manzano, 2016; Greenhalgh et al. 2017), with a few twists of mine, and I’ll comment on the challenges.

  • Step 0: Ask about context

Start with a question about the individual’s wider life, family, etc., as well as questions about the context. As Chris Roche reminded me recently, it’s slightly bizarre that an approach which explicitly focuses on context doesn’t necessarily start with a question on context and how interventions are situated within it. So, we should do this up front.

  • Step 1: Ask general and open questions

Interviews should start with general questions about interviewees’ role in, experiences of, and views about the programme. In the first round of interviews, questions should be mainly exploratory. This is the stage of theory-gleaning, conducted primarily with higher-status practitioners. Questions might include an appraisal of the interviewee’s contact with the programme and of the programme’s outcomes for different groups.

  • Step 2: Explore the participant’s initial explanations

Subsequent questions follow up their responses, asking them to tell their stories about specific experiences or issues with the programme, its participants, and constraints. These experiences can be from higher-level staff but are more likely to be from frontline staff and service users.

  • Step 3: Ask about theory “topics”

Before making specific theory propositions, you should present various theories loosely. There should be plenty of “active listening” here. This, at least, gives the semblance of impartiality and objectivity (even if some interviewees will see through this and pick up on the interviewer’s tone).

  • Step 4a: Propose alternative theories for the same outcome

Rather than presenting your own theory, hoping to confirm it, you should test multiple, including contradictory, theories about the same aspect of the programme with the same respondent. This can help narrow down candidate theories and allows respondents to contribute to theory development. How explicitly you present theory propositions is debatable. Personally, rather than presenting theories, I would only report back my interpretations of potential theories derived from what the interviewee says themselves, seeking clarification.

  • Step 4b: Present partial theories and ask respondents to confirm, refute or refine

At the same time, you present digestible chunks of your candidate theories. It’s argued you should explicitly introduce and test specific elements of programme theory with respondents at this point. It may be that these are just fragments which accumulate.

‘The realist investigator tries to understand how each fragment of evidence contributes to their interpretations and explanations and how their ideas are tested and refined within and between those fragments (Greenhalgh et al. 2017: 1).’

  • Step 5: Present fuller theories and ask respondents to refine further

By this point in particular, the evaluator should have quite a lot of knowledge about the programme and its nuances. So, questioning evolves further and becomes more tailor-made to refine specific context-mechanism-outcome (CMO) configurations. These configurations are supposed to be made explicit to the interviewee (whether the CMO language is used or not). Fuller theories might perhaps be presented during follow-up interviews with key informants.
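
To pull the steps together, below is a minimal sketch of the sequence as an interview-guide skeleton. The prompts are placeholder wordings of my own, not validated realist questions, and the option to stop at step 4a anticipates the concern I raise next.

```python
# Illustrative only: the steps above as an ordered interview-guide skeleton.
# Prompts are placeholder wordings (my assumption), not validated realist questions.
INTERVIEW_STEPS = [
    ("0", "Ask about context",
     "Tell me about your role, and about the setting this programme operates in."),
    ("1", "Ask general and open questions",
     "What has been your contact with the programme? What has it meant for different groups?"),
    ("2", "Explore the participant's initial explanations",
     "Can you tell me about a specific experience that shaped your view?"),
    ("3", "Ask about theory 'topics'",
     "Some people point to X as important here. What do you make of that?"),
    ("4a", "Propose alternative theories for the same outcome",
     "Others say the opposite: that Y, not X, explains this. Which fits what you've seen?"),
    ("4b", "Present partial theories to confirm, refute or refine",
     "One idea we're testing is that [theory fragment]. Does that hold here?"),
    ("5", "Present fuller theories to refine CMO configurations",
     "Putting it together: in [context], [mechanism] seems to produce [outcome]. Where does that break down?"),
]

def print_guide(stop_after: str = "4a") -> None:
    """Print the guide up to a chosen step, e.g. stopping at 4a
    for those who prefer not to reveal candidate theories explicitly."""
    for step, purpose, prompt in INTERVIEW_STEPS:
        print(f"Step {step}: {purpose}\n  e.g. {prompt}")
        if step == stop_after:
            break

print_guide("4a")
```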

Asking open questions, presenting alternative theories, asking for examples and further explanation, and presenting theory topics are not problematic. For me, the issues come when the interviewer starts to reveal their candidate theories (step 4b), and especially when attempting to refine and consolidate primary theories (step 5). It’s surely very easy to get these steps wrong, especially when theories are presented through closed questions.

Gill Westhorp and Ana Manzano (2017) have developed a very helpful ‘Starter Set’ of questions for realist interviews. However, when the interviewer explicitly reveals large chunks of their theory, the angle inevitably changes. Manzano (2016: 16) provides the following example:

“Evaluator: Why do you think this patient was discharged to a nursing home? Do you think he could have gone to his own home instead of a nursing home? And I am saying this because one of the theories about this policy is that to accelerate hospital discharges, is sending people into care homes too soon. Right?”

It would be quite hard to say no to this, because it’s a leading question with strong normative disapproval (“too soon”) baked into it. As the interviewee, what do you think is the socially desirable response? Is that what the evaluator is looking for?

This also depends on how the interviewee was primed. At this point, the evaluator is refining their theory. We’re told that, in this case, some patients consider care homes a safer option than returning home, especially after a long period of acute illness. So, may we assume the nurse has already been told about this? May we also assume that accelerating hospital discharges is not viewed positively? Whether they have been told or not and how negatively the question is framed can both make a huge difference to the nurse’s response.

Discharge Liaison Nurse: “Ummm…I think at the time, this patient could have gone home and managed at home with a big care package. We could have organised three to four home care visits a day- and one visit in the nighttime. I think, he has gone into care because he’d been in hospital for a long time and he was scared about being on his own. Plus, he had some medical issues which needed monitoring. And I think, possibly, it was also peace of mind.”

The response does shed additional light on the details of the discharge, but the way the question is framed leads the nurse almost inexorably towards a more socially desirable (but possibly less accurate) response. And I can’t help but wonder about the “Ummm.”

I remember once asking the director of a think tank to take a side, and he responded: “Soy muy gris (I’m very grey).” That is to say, “I’m not prepared to make a definitive judgement on this topic; don’t force me to pick a side.” My worry is that in revealing our theory (unnecessarily), that’s exactly what we’re asking interviewees to do. So, step 4a might be as far as Process Tracing can reasonably go.

Let me know what you think, whether you think I’m being unfair, or whether you share some of my concerns. Thanks for reading.
