Evaluation is a challenge for statutory and voluntary sector services
Joe Green, Senior Population Health Analyst at Optum, reflects on the challenges involved in evaluating innovative projects across health and social care and describes some of the advice and support available to help practitioners.
Fundamentally, everyone in the NHS starts from a good place: we all want to improve care and achieve better outcomes for patients. As a result, every new service or intervention we put in place invariably begins with the aim of making something better.
This might involve:
- Improving a clinical outcome
- Preventing avoidable ill health
- Reducing demand on other services
It might even be as “simple” (although it’s often anything but) as getting patients to re-engage with a service or connecting them with wider sources of support.
But are we actually making the difference we think we’re making?
Given the complex environments in which we work, it’s often difficult to prove that a service is working effectively and achieving what it set out to do. This is especially true when there may be a host of intermediate steps involved in achieving these end goals and multiple other variables that might affect the result.
That’s why evaluation is such a big challenge for both statutory and voluntary sector services. Everyone knows it’s the right thing to do. But it is difficult. It takes time. It can be complicated. And it’s quite easy to get wrong.
10 evaluation “must-dos”
So how do we make it work better in practice? Last month, we had the pleasure of hosting more than 50 colleagues from across the NHS to explore the realities of evaluating proactive care interventions.
What emerged from our conversations was a core set of evaluation “imperatives” — those must-do, practical things we’ve all learnt (sometimes painfully!) that have to happen if an evaluation project is to work.
If I had to summarise them, these are the 10 most important principles drawn from people’s practical experiences:
- Build evaluation in early — and react fast as and when other data needs emerge.
- Create a common language across partners to describe what good looks like.
- Be realistic and manage people’s expectations of what can be achieved.
- Make sure outcomes are co-produced and agreed with the communities themselves.
- Simplify and streamline the indicators you’re using to measure projects against.
- Make it as easy as possible for people to collect the data needed.
- Identify and celebrate the “small wins” showing your progress towards a bigger goal.
- Look out for “the golden thread” that links your service outcomes with organisational goals.
- Use qualitative and quantitative approaches to understand impact — tell stories.
- Share and act on the insights by doing more of what works and less of what doesn’t.
I’d perhaps add a further overarching lesson — that evaluation needs to be mainstreamed and integrated as much as possible into the way people work. This is particularly important when it comes to recording information on when and where the interventions happen, and to whom: if you can get clinicians recording this data on clinical systems (using SNOMED, OPCS and other codes), it becomes part of their day-to-day work and is less likely to get forgotten on even the busiest of busy days.
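To make that concrete, here’s a minimal sketch of the kind of analysis that falls out almost for free once interventions are coded routinely on clinical systems. Everything in it is a placeholder — the file name, column names and the SNOMED concept ID are hypothetical, standing in for whatever codes and extracts your own project agrees:

```python
import pandas as pd

# Hypothetical extract of coded events from a clinical system.
# The file name, column names and SNOMED concept ID below are
# illustrative placeholders, not real codes.
INTERVENTION_CODE = "123456789"  # placeholder SNOMED CT concept ID

events = pd.read_csv("coded_events.csv", parse_dates=["event_date"])

# Keep only records where the agreed intervention code was entered.
delivered = events[events["snomed_code"] == INTERVENTION_CODE]

# Monthly count of distinct patients receiving the intervention --
# a simple activity measure that needs no extra data collection
# once coding is part of clinicians' day-to-day recording.
monthly = (
    delivered.groupby(delivered["event_date"].dt.to_period("M"))["patient_id"]
    .nunique()
    .rename("patients_reached")
)
print(monthly)
```

The point isn’t the specific code — it’s that once the “who, when and where” of an intervention sits in structured, coded fields, questions like “how many people did we reach last quarter?” become a query rather than a data-collection exercise.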
Shared challenges we continue to face
But although these pieces of acquired wisdom can get us a long way, they don’t necessarily solve every issue in evaluation practice. Many of the questions that cropped up in the webinar reflect the troublesome practical challenges we all continue to face as evaluation practitioners:
- How do you free up the resource and headspace necessary to work through your evaluation plans and get all partners on the same page — particularly in projects where some of the burden of data collection may fall to small voluntary organisations?
- How do you tackle more systemic issues with data quality where you may have limited influence or control — for example, in improving how accurately healthcare professionals code things on their clinical systems?
- And how do you achieve buy-in, support and understanding among senior leaders, who are critical to making sure evaluation is prioritised within a project or programme, but don’t always grasp the human effort required to deliver it?
There were also some more profound questions that go to the heart of why evaluation is so fundamental to the future of our health and care system.
- What does good look like when it comes to showing the impact and return on investment (ROI) of preventative services?
- How might we best evaluate and evidence our progress in tackling different health inequalities?
- And how can we demonstrate, robustly and scientifically, the potential value of reallocating resource from one part of the system to another?
These questions are game-changers because answering them would help us make the bigger, evidence-based decisions about how we shape our services around what works. But they’re extremely difficult to answer with the limited time and resource that’s often available.
About Joe Green
Joe is a senior healthcare data analyst with more than 15 years’ experience of working in health analytics across local authority, public health, NHS commissioning, and secondary care.
This article was prepared by Joe Green in a personal capacity. The views, thoughts and opinions expressed by the author of this piece belong to the author and do not purport to represent the views, thoughts and opinions of Optum.