…looking to score!
I keep reading reports saying we need to improve the evidence base for involvement – but what does that mean… is it even possible?
It’s almost become a bit of a habit. We’re so used to the culture of evidence-based medicine that we seem to feel the need to develop an evidence base for everything! And the only evidence that counts, of course, is statistical data…
I understand how such an evidence base helps with making decisions about healthcare. People at all levels – from the individual patient, through to NHS organisations and policymakers – want reliable, statistical data to support them in making ‘the best decision’. But even then, other factors will also come into play.
So I think we’re mistaken if the expectation is that an evidence base for public involvement will give us the same kind of predictive information. I think people hope that such ‘robust data’ would help us be certain about which projects will benefit from involvement, which approach will be most useful, and how best to do it.
But involvement doesn’t work like a health intervention. It is possible to quantify and measure its impact, but for all the reasons I describe in a recent journal article, the findings from such scientific approaches may not be generalisable. Context is everything with involvement – so what you learn in one context through a carefully constructed randomised controlled trial might not then apply to other settings.
I’m suggesting it’s time to go cold turkey. Let’s stop worrying about the evidence base. Let’s stop thinking we have to do an RCT to prove every aspect of how involvement works. I think there are other, more useful ways to learn about involvement – ways based on gaining wisdom and insight through experience. Does that make it an art rather than a science?