…looking to score!
I keep reading reports saying we need to improve the evidence base for involvement – but what does that mean… is it even possible?
It’s almost become a bit of a habit. We’re so used to the culture of evidence-based medicine that it seems we feel the need to develop an evidence base for everything! And the only evidence that counts, of course, is statistical data…
I understand how such an evidence base helps with making decisions about healthcare. People at all levels – from the individual patient, through to NHS organisations and policymakers – all want reliable, statistical data to support them in making ‘the best decision’. But even then, other factors will also come into play.
So I think we’re mistaken if the expectation is that an evidence base for public involvement will give us the same kind of predictive information. I think people hope that such ‘robust data’ would help us be certain which projects will benefit from involvement, which approach will be most useful, and how best to do it.
But involvement doesn’t work like a health intervention. It is possible to quantify and measure its impact, but for all the reasons I describe in a recent journal article, the findings from such scientific approaches may not be generalisable. Context is everything with involvement – so what you learn in one context through a carefully constructed randomised controlled trial, might not then apply to other settings.
I’m suggesting it’s time to go cold-turkey. Let’s stop worrying about the evidence base. Let’s stop thinking we have to do an RCT to prove every aspect of how involvement works. I think there are other, more useful ways to learn about involvement – ways that are based on gaining wisdom and insight through experience – does that make it an art rather than a science?
… what’s the problem?
I don’t think we make this clear…
What involvement does is bring new knowledge and a different perspective to research – it’s information that’s ‘new’ to researchers and a world-view that’s different to their own. When you look in detail at how involvement makes a difference, it often fills a gap in researchers’ knowledge, or corrects an assumption they’ve made, or identifies a problem they haven’t anticipated. It reveals what the researchers don’t know – the unexpected.
At the beginning of any project, the researchers don’t know what they don’t know – and don’t find out until they’ve involved patients. How would they know, for example, that the wording of their recruitment letter was putting people off, until a patient read it and told them?
I think this is why some researchers haven’t understood what involvement will do for them. They don’t even realise they’ve got a problem, so don’t perceive a need for the ‘involvement solution’. We try to persuade researchers to do involvement by telling them about the benefits for research – maybe it would be more effective to help them realise there are things they don’t know that patients do.
The value of the researchers’ experience
It’s anecdotal. It’s weak evidence – it’s just a bunch of stories!
These are criticisms sometimes levelled at the patients’ perspective. I think it’s interesting that the same criticisms are currently being made of researchers. I keep reading reports stating that the published evidence of the impact of involvement is weak and anecdotal – because it’s mostly made up of researchers’ ‘stories’.
But working in patient and public involvement we’re always talking about the value of experiential knowledge. Surely it can’t be that ‘patients’ stories’ have value, but ‘researchers’ stories’ don’t?
We don’t respond to these criticisms of the patient experience by arguing that what’s needed is more rigorous and robust research into patients’ lives. So why do we respond to these criticisms of the researchers’ experience by arguing that more robust measures of impact are required?
Maybe we need to listen to our own arguments on this one? We’ve nearly twenty years of experience and hundreds of researchers’ and patients’ ‘stories’ of involvement to reflect on – could we be doing more to draw out their valuable insights and learning?
Public involvement is not an intervention
Perhaps it’s not surprising – given that we’re all working in the world of health research, whose fundamental purpose is to support evidence-based medicine – that we can’t help but think of involvement as an ‘intervention’ and seek ‘evidence’ of its impact. The questions we ask about involvement are the same ones we ask about new treatments:
- Does it work?
- What difference does it make and can we measure that?
- What are the benefits and risks?
- Is it cost-effective?
I think we’re getting stuck with this thinking – because involvement isn’t an intervention. Public involvement isn’t a precisely defined, standardised action – it’s varied, complex and highly dependent on context. You can’t just add two patients to a steering committee and predict a specific outcome…
I find it more helpful to think about involvement as a conversation. It’s an exchange of ideas, values, assumptions and experiences between researchers and patients. It’s an ongoing dialogue – an interaction that evolves over time. When you think about it this way, those questions don’t seem so appropriate.
- Does a conversation work?
- What difference does a conversation make? Can we measure that!?!
- What are the risks and benefits of a conversation?
- Is it cost-effective to have a conversation?
The answers become ‘Well, it depends’. Mostly it depends on why you want to have the conversation and what you hope will come out of it. It depends on what the conversation is about, what gets said, who takes part, where and when it takes place, how it’s done, and importantly what people knew before they started – these are all the factors that influence the impact of involvement. You can’t neatly separate out the outcomes from its purpose, context and the way it’s carried out – these are all inter-related.
So I think we should be asking different questions about involvement, something along the lines of:
- How does it work? What useful things get said?
- What kinds of learning come out of it?
- What makes it go well?
- When and where does it work best?
Or, are there other questions?
A gap in involvement policies & practice (#1)
I’ve been reviewing a number of involvement policies recently, and having read through a few, I’ve noticed two gaps I’d not spotted before. I’ll talk about the first one in this post and the second in the next…
One organisation’s policy was all about involving patients and the public in their decision-making committees. At first glance, I thought it was superb! It ticked all the boxes for good practice – a job description and person spec, a fair and transparent recruitment process, excellent training and support. But then I realised – it only talked about patients and members of the public. There was nothing about selecting, preparing or supporting the other committee members.
Involvement is a partnership. So an involvement policy that only talks about one partner is like a marriage-guidance counsellor only talking to one spouse! Both partners need to be encouraged to find ways to work together, to draw on their respective strengths and to develop mutual trust and respect.
I worry that this focus on patients/ the public gives them all the responsibility for making the partnership work – as if it’s just them that makes the difference – when actually it’s about how well everyone round the table works together. Shouldn’t all involvement policies include the ‘other-half’?
The researchers’ experience
Involvement in research is essentially a conversation*. It’s about one group of people talking with another group of people – clinicians/ researchers/ research staff talking with patients/ carers/ members of the public. Through their conversation, they share ideas, knowledge, opinions and experiences.
So when it comes down to it, the impacts of involvement often start by changing what the researcher thinks. I’ve heard researchers describe this as a ‘light-bulb moment’. It causes them to think differently, act differently and/ or communicate differently – which ultimately has an impact on the research they do.
The impact of involvement therefore always depends on where the researcher starts out – what assumptions, skills, values and knowledge they bring to the table. If the researcher starts with the wrong assumptions about what matters to patients, then talking to patients will give them very different ideas about what questions need to be asked, or what outcomes need to be measured. If the researcher is already good at writing in plain English, then involving patients in reviewing their patient information might not make much difference to how easy it is to understand.
The final outcome will also depend on what the researchers experience and do in response – what they hear, learn and value from the conversation, and ultimately what changes they decide to make. The main impact of involvement is therefore on researchers – but we rarely talk about that. We talk about impacts on the research as if it’s separate from the people doing it. Maybe we need to hear more about the researchers’ experiences to gain a deeper understanding of how involvement works?
*except perhaps in the context where patients carry out the research themselves.
How do we persuade researchers to do involvement? I think involving patients in research is a bit like being on a roller-coaster – no, not because of its ups and downs – but because you’ve got to experience it for yourself to really know what it’s like.
In the same way that patients provide insights based on their direct experience of a health condition, I think understanding the difference patient involvement makes to research is a form of experiential knowledge. You’ve got to do it – to get it.
Until you’ve sat and talked with a patient and had that lightbulb moment – that reality check which makes you realise ‘You know what – I’ve been thinking about this all wrong’ – until you’ve had that experience, you may not get what involvement means – no matter how much information you’ve been given about it. Researchers who’ve had this experience tend to be the enthusiasts!
So with the roller-coaster… you could know every measurable fact about it – the speed of the ride, the length of the track and so on. You could know everything about the impact – the changes in cortisol levels, and the endorphin rush – but none of this tells you what it’s like to be on it, or why some people want to do it again and again…
What persuades people to go on a roller-coaster? Watching from the side, hearing someone else rave about it, having someone else go with them on their first ride? What then are the lessons for encouraging researchers to do involvement? Maybe it isn’t about giving them more ‘robust evidence’. Maybe it’s about giving them the bottle to give it a go…
Has anyone tried this kind of approach? I’d love to know if it works…
Image from www.coasterimage.com