The Phantom Menace awakens

Paul Taylor-Pitt, assistant director of organisational development at NHS Employers, reflects on evaluating organisation development (OD).

4 June 2020

Many moons ago, when we set evaluating OD as one of our five priorities for the year, I sighed. It was the kind of sigh you give after watching a particularly disappointing film. Remember that feeling after watching Star Wars Episode I: The Phantom Menace?

I think I saw evaluation as one of those necessary evils. To me it’s the Darth Vader of the OD world. I appreciate that without it there’s no story, but it doesn’t half bring the mood down sometimes.

One of the things I’ve learned is that I’m not alone in this feeling. I ran a session on evaluation with a group of OD practitioners and wanted to get a sense of their relationship to it. I asked them to think of a film that would sum up what evaluation means to them. The list included Dangerous Liaisons, Weird Science, Much Ado About Nothing, The Green Mile, Clueless and Misery. What does that tell you?

Evaluation is one of our top priorities because people struggle with it and we want to help. Since the launch of Do OD in 2013, one of the questions I frequently come across is ‘how do we evaluate OD?’ I have also found (and still find) evaluation a bit of a struggle. By applying some focus and energy to this topic, we hope to help people feel more confident and capable in their evaluation practice.

This in turn will help us to demonstrate our impact and value more effectively. I hope that it will contribute to a growth in our capability and confidence; it might even shape how we do OD. At times of change, OD is more essential than ever. The ability to measure and describe how we add value to our organisations enhances the credibility of our work. So why do we find it difficult, and what can we do about it?

Weird Science?

We’ve been working with a great group of OD practitioners over the last few months, diving deep into the topic of evaluation. When we explored some of the reasons why we find evaluation difficult, we uncovered a lot of stuff. Things like ‘we don’t have time’, ‘it’s in the too difficult box’ and ‘we’re not great with numbers’. Alongside that is a palpable sense of frustration: we know we need to be better at evaluation; we just need something to shift.

This got me wondering, if we’re thinking of evaluation as ‘weird science’ and our prevailing emotion is ‘misery’, does that affect how we evaluate our work?

Otto Scharmer, inventor of Theory U, says: “The success of an intervention depends on the interior condition of the intervenor”.

I love that quote. Now think about how you approach evaluation. When you talk about evaluation to colleagues do you notice a shift in the tone of the conversation? There’s a risk that our collective community response to evaluation is a big old sigh instead of a rousing cheer. So how can we shift our response?

Gervase Bushe’s work on dialogic OD contains some helpful insights. The Dialogic Model of OD is underpinned by the premise that the language and stories we use to describe an issue affect how we think about that issue, which in turn impacts the actions and decisions we make.

Those actions, when repeated, shape and form our culture. In other words, to change the prevailing culture, we have to change our actions. To change our actions we must change our thinking. We change our thinking by changing the language and stories of the community. Read Bushe’s 2013 article for a helpful summary of his ideas.

Bushe suggests a three-step process that begins with re-framing how we talk about an issue into one based on possibilities instead of deficits. When conversations take place within this possibility-centric frame, a new image emerges that generates new ideas. This generative image helps to change the conversation to a more positive and hopeful one. These conversations stimulate action which in turn changes the culture of the community.

So let’s give it a go and try a little experiment. Forget Star Wars Episode I. Really try. Banish every appearance of Jar Jar Binks from your mind. There is no Phantom Menace. Feel better? Good!

Now imagine that our relationship with evaluation is ‘A New Hope’. Or to be even more current, ‘The Force Awakens’. Does the idea that evaluation could give us hope make it more appealing? How can we awaken our energy for evaluation? Can evaluation be a force for good?

What do others say?

The good news is we’re not starting with a blank page when it comes to ideas on evaluation. Many have walked this path before us, leaving a trail of wisdom. One of the most helpful things I did was to put on my thinking cap and go off to find inspiration in others’ experience. I journeyed through some of the literature available and started putting together my own checklist, plucking ideas and advice from wherever I found them. I can’t promise that my reading was exhaustive; in fact it’s really just the tip of the iceberg, but it’s a start. Here’s a short summary of some of the stuff I discovered.

As far back as 1978, Porras and Berg made the point that OD is concerned with human processes. They recommended improving our evaluation methodologies, particularly by making our data sources more eclectic to better illustrate the spectrum of human processes. Our design and analysis of data collection need to be more sophisticated, underpinned by a better understanding of the dynamics of the change process.

In 1981 Terpstra said: “For OD to progress beyond the faddish stage, some improvements must be made regarding current evaluation practices”.

Although this article is getting a bit long in the tooth, it’s full of timeless advice and wisdom. Terpstra helpfully talks about the importance of clarity at the early stages of intervention planning, and the value of dedicating time and energy to formulating clear outcomes and objectives for any piece of work. He notes that ‘ambiguous goals and objectives lead to ambiguous conclusions’.

One central question comes up again and again that’s essential to ask: “Did the intervention deliver our intended objectives?”

Meditate on that question for a minute. What does it bring up? Does it light any bulbs about your OD practice? It makes me notice that not all of my interventions have had clear objectives. It helps me to realise that maybe I wasn’t as clear about the intention of the intervention as I could have been. Setting objectives is core to good OD practice and essential for evaluation. How we measure the impact is a vital part of the conversation.

Metrics and measures

There’s a lot of debate about measures. It sometimes falls into the realm of qualitative versus quantitative and which is best.

Wagner (2002) argues that OD practitioners ‘must have a working familiarity with quantitative measurement tools’ and an awareness of our impact on the bottom line of the business. The article helpfully explains some of the characteristics of quantitative measurements (design, reliability, validity, variability) and gives tips on how to present quantitative data to an audience.

The quantitative stance isn’t the only game in town. People disagree with each other on this all the time. Strauss (1973) noted that ‘OD is likely to be evaluated in terms of gut reactions rather than dispassionate research’ and that we should be looking through a qualitative lens. So is it one or the other?

Finney and Jefkins (2009), in their report Evaluating OD, highlight the need for a ‘third way’: a mix of quantitative measures that identify shifts and add rigour to our data, alongside qualitative measures that help us to surface themes and tell the stories that sit behind the numbers. I think this is a great approach.

We’ve been working on ways of helping people to have conversations about measures, and our Visual dialogue tool can be used as a conversation starter. It’ll help you think through quantitative and qualitative measures, both subjectively and objectively. You can use the tool to identify the kinds of data you use in your evaluation methodologies. What could it reveal? Everything in one box? A good balance across all four? A bias towards existing, easy-to-reach data?

ROI or ROI?

The issue of Return on Investment (ROI) is a recurring theme in conversations about evaluation. In fact, it’s one of the most commonly asked questions I get about OD when I’m out and about. I sometimes wonder, though, is that the right question? ‘How do you measure ROI?’ has a really straightforward answer. You work out ROI by dividing the financial benefit of an intervention by the cost and multiplying by 100 (there’s a quick worked sketch after the questions below). Simple. The complicated bit sits in the detail. There are other questions to ask.

  • Do we always know the cost of the intervention?
  • Whose job is it to define the financial benefits of our work?
  • If the answer to ROI is 3.4 per cent, is that better or worse than 1 per cent?
  • Would anyone believe us if we said the ROI was 3,400 per cent, even if we could show our workings?
  • Who is asking the ROI question and what are they really asking?
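
To make that arithmetic concrete, here’s a minimal sketch of the calculation in Python. The figures are entirely made up for illustration, and the function simply encodes the benefit-divided-by-cost version of the formula described above; some practitioners prefer a net-gain variant that subtracts the cost from the benefit first, so that is sketched too.

```python
# A minimal, illustrative sketch of the ROI arithmetic described above.
# All figures are hypothetical.

def roi_percentage(financial_benefit: float, cost: float) -> float:
    """ROI as described in this post: benefit divided by cost, times 100."""
    return financial_benefit / cost * 100

def roi_net_gain_percentage(financial_benefit: float, cost: float) -> float:
    """A common alternative: net gain (benefit minus cost) divided by cost, times 100."""
    return (financial_benefit - cost) / cost * 100

# Imaginary intervention: 5,000 spent, 170,000 of benefit attributed to it.
cost = 5_000
benefit = 170_000

print(f"ROI (benefit / cost): {roi_percentage(benefit, cost):,.0f}%")            # 3,400%
print(f"ROI (net gain / cost): {roi_net_gain_percentage(benefit, cost):,.0f}%")  # 3,300%
```

As the questions above suggest, the hard part isn’t the division; it’s agreeing what counts as the cost and the benefit in the first place, and whether anyone will believe the number that comes out.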

Measuring ROI is not easy, but it’s important, because it’s important to the people who commission our work. Maybe we do need to be better at the numbers, more rigorous in our costings and even more skilled at measuring outcomes. Imagine how good you’d feel if you could answer the ROI question quickly and confidently when it was posed.

I’m interested in both the numerical version of ROI (investment) and also a different way of approaching the conversation, where the ‘I’ stands for intention. Are we always clear about the intention that sits behind our work? Can we show how our OD interventions contribute to the strategic goals of the organisation? Are there ways to illustrate the stories that underpin the progress we aim to make? If we think about Intention as well as Investment, we can craft more nuanced and descriptive ways to demonstrate ROI, and ROI.

Evaluation is an intervention

Finney (2013) describes evaluation as an intervention in its own right and affirms the notion that there isn’t a one-size-fits-all solution. Instead, ‘every evaluation will require an element of problem solving, thought, analysis and creativity’.

One of the central themes in our field of practice, and so therefore also central to evaluation, is the use of ‘self as instrument’. If you haven’t already read Mee-Yan Cheung-Judge’s article, The Self as an Instrument – A Cornerstone for the Future of OD, I strongly recommend you do.

The use of self is at the heart of our 3P Capability model, created by NHS OD practitioners to help us reflect on our own abilities. The capability model has three overarching areas that help build our capability.

1. Purpose
2. Presence
3. Practice.

Sitting behind those three areas are nine elements that contribute to our capability:

  • clarity, coherence and impact - in relation to our purpose
  • competence, confidence and identity - building blocks of our practice
  • power, profile and positioning - contributing to our presence.

Together those elements shape our feelings of agency, instrumentality and passion - vital attributes in our growth and development.

You can use the capability model as a tool to reflect on your evaluation practice. Ask yourself:

  • what strengths do you already have in the domains of ‘purpose, presence and practice’?
  • which areas might need a bit of polishing?
  • what new ones might you need to acquire?

By focusing some attention on ourselves, we can get a better sense of where our strengths lie and where we might need to pay some attention. Sharpening our instruments, polishing our gems and taking time to reflect on our own practice is key to our success.

What have I concluded?

Evaluation is complex, because our work is complex. We don’t work in simple organisations where everything is predictable. Therefore our approach to evaluation needs to be sophisticated and multi-faceted.

We evaluate for many different reasons. Maybe it’s because someone asks us to. It could be because we need to add a section into the annual report. Perhaps we’re looking for evidence to include in an award entry. We might need to put together a business case for investment.

Evaluation is an opportunity to learn about ourselves and the system we work in. Most importantly for me, evaluation is a mechanism for illustrating the connection between our work and patient care, highlighting where we add value to the goals of the organisation.

There are some ‘evaluation essentials’ that I discovered:

  • Have clearly defined objectives from the start.
  • Match your interventions to concrete organisational outcomes.
  • Be creative in the use of data, both quantitative and qualitative.
  • Think about different ways of sharing your findings. Be bold.
  • Now and again, look in the mirror and reflect on the strengths you have and where you might need to do some work.

Evaluation isn’t easy, and we sometimes avoid things we find difficult. I know I do. The main thing I’ve taken away from my exploration of evaluation is that, like OD itself, it’s down to us to sharpen our instruments.

We’re already doing great work. There are bright, shining examples of change and progress across the NHS. By focusing some attention on how we evaluate our work, we can continue to show the value we add to patient care. If we are brave enough to look into the areas we find difficult, we can brighten up those darker spaces and come out feeling more confident and capable in our work. Our work in OD, like Star Wars, is a mix of the darkness and the light. It’s time to let the Force awaken.