It matters who you involve


Whose experience is most valuable?

I’d like to suggest that to get the most out of involvement, we need to involve people with direct experience of whatever condition is being studied. If you’re studying substance misuse amongst homeless people, you need to talk to people who’ve had that exact experience – not just any mental health service user will do…

I’m basing this conclusion on a recent evaluation of the work of the FAST-R panel – a panel of mental health service users and carers who review patient information sheets, protocols and questionnaires for mental health researchers. (A journal article about the evaluation has just been published.) The evaluation involved reading every single comment the Panel had made on 85 studies over a period of three years.

Their comments fell into three different categories. The first category included comments about making the information clear and easy to understand – rewriting the documents in plain English and changing the format. Any lay person might have done this – some might even argue that a science journalist could do it!

The second category included comments that I’m going to call general patient/ carer comments. These related to issues that many patients would know about – for example, checking that researchers had enough money in their travel budget for participants to take taxis when needed. In this particular example, Panel members also reminded researchers of the need to be sensitive to service users’ concerns about stigma and discrimination – issues that your average member of the public might not have picked up, but ones that people with a range of mental health problems would know about.

The third and final category included what I’m going to call patient/ carer expert comments. These were comments that were based on the unique insights of people with experience of a specific health problem. One example came from a review of a study of brain activity in people with schizophrenia. The information sheet explained that music would be played while people were in the MRI scanner. One of the Panel members with schizophrenia commented that if he were experiencing paranoia, he’d need to know exactly which piece of music was going to be played ahead of time.

So I’m concluding from this evaluation that if you want comments at all three levels you need to involve people with the right kind of experiential knowledge. Is this happening? I’m not sure. I’ve noticed that lots of panels and groups are being set up to support involvement across a wide range of research studies, and I’m wondering if sufficient attention is paid to matching people’s experience to the projects they’re asked to comment on.

Some people might say this doesn’t matter. If such panels are making the information clearer and participation in research easier, then that’s already a great improvement on what’s gone before. But I think those patient/ carer expert insights are like the icing on the cake – the detail that might make all the difference.

Understanding the differences between these contributions is, I suggest, crucial to understanding the purpose of involvement. It’s not only about making research lay-friendly – it’s about making research relevant and acceptable to specific groups of patients. We may need to think more carefully about whose experiential knowledge is going to be most valuable in any particular study – so we can be sure to involve the people who have the most relevant experience.


The power of the anecdote


Why patients’ stories work

I just googled ‘the power of the anecdote’ and two sites came up that exactly illustrate the problems we have with this in public involvement in research.

The first site, Ben Goldacre’s Bad Science, talks about how anecdotal reports of the effects of treatments can be potentially misleading, while clinical trials provide the best estimate of the true benefits of a drug. Of course this is right – it’s the reason we do research and why we support the development of evidence-based medicine.

However, when the patient perspective is brought into the research world, some researchers want to apply the same rules. They dismiss patients’ stories because they’re not good ‘evidence’. It seems researchers are not quite sure how to use the experiential knowledge that patients provide.

This is where the lessons from the second site ‘Presentation Pointers’ come in. This site encourages the use of anecdotes in presentations because they say anecdotes are one of ‘the most powerful communications tools ever discovered’. I think this describes the true value of patients’ stories. They have the power to communicate and therefore the power to challenge researchers’ assumptions.

I can tell you a great story to illustrate this point. I recently interviewed a researcher who told me how a patient’s anecdote had had a dramatic impact on a NICE committee evaluating new treatments. This committee was reviewing two forms of insulin for the treatment of diabetes. On paper, the clinical data suggested that both forms were equally good at reducing blood sugar, but the newer version cost more money – suggesting it wasn’t any more cost-effective. However, a diabetes patient who was at the meeting alerted them to an important difference between the two forms – a difference they weren’t aware of. He explained that the older, cheaper version was more likely to result in hypoglycaemic attacks, and he said ‘Sometimes I don’t take my insulin at night, because I’m afraid I might not wake up in the morning.’ This statement challenged the committee’s assumptions about benefits. It sparked a ‘lightbulb moment’ in a way that a report of patients’ experience describing ‘non-adherence to treatment’ might not have done.

I think patients’ stories work precisely because they’re anecdotal. If we try to turn them into evidence – by researching patients’ views and producing technical reports – we are in danger of losing their impact and value. We need stories in the patients’ own words, and they are probably best spoken by patients.

Anything else is disempowerment.

We’re just evidence-base junkies…


…looking to score!

I keep reading reports saying we need to improve the evidence base for involvement – but what does that mean… is it even possible?

It’s almost become a bit of a habit. We’re so used to the culture of evidence-based medicine that it seems we feel the need to develop an evidence base for everything! And the only evidence that counts, of course, is statistical data…

I understand how such an evidence base helps with making decisions about healthcare. People at all levels – from the individual patient through to NHS organisations and policymakers – want reliable, statistical data to support them in making ‘the best decision’. But even then other factors will also come into play.

So I think we’re mistaken if the expectation is that an evidence base for public involvement will give us the same kind of predictive information. I think people hope that such ‘robust data’ would help us be certain which projects will benefit from involvement, which approach will be most useful, and how best to do it.

But involvement doesn’t work like a health intervention. It is possible to quantify and measure its impact, but for all the reasons I describe in a recent journal article, the findings from such scientific approaches may not be generalisable. Context is everything with involvement – so what you learn in one context through a carefully constructed randomised controlled trial might not then apply to other settings.

I’m suggesting it’s time to go cold turkey. Let’s stop worrying about the evidence base. Let’s stop thinking we have to do an RCT to prove every aspect of how involvement works. I think there are other, more useful ways to learn about involvement – ways that are based on gaining wisdom and insight through experience. Does that make it an art rather than a science?

If involvement’s the solution…

…what’s the problem?

I don’t think we make this clear…

What involvement does is bring new knowledge and a different perspective to research – it’s information that’s ‘new’ to researchers and a world-view that’s different to their own. When you look in detail at how involvement makes a difference, it often fills a gap in researchers’ knowledge, or corrects an assumption they’ve made, or identifies a problem they haven’t anticipated. It reveals what the researchers don’t know – the unexpected.

At the beginning of any project, the researchers don’t know what they don’t know – and don’t find out until they’ve involved patients. How would they know, for example, that the wording of their recruitment letter was putting people off, until a patient read it and told them?

I think this is why some researchers haven’t understood what involvement will do for them. They don’t even realise they’ve got a problem, so don’t perceive a need for the ‘involvement solution’. We try to persuade researchers to do involvement by telling them about the benefits for research – maybe it would be more effective to help them realise there are things they don’t know that patients do.

More than a bunch of stories…


The value of the researchers’ experience

It’s anecdotal. It’s weak evidence – it’s just a bunch of stories!

These are criticisms sometimes levelled at the patients’ perspective. I think it’s interesting that the same criticisms are currently being made of researchers. I keep reading reports stating that the published evidence of the impact of involvement is weak and anecdotal – because it’s mostly made up of researchers’ ‘stories’.

But working in patient and public involvement we’re always talking about the value of experiential knowledge. Surely it can’t be that ‘patients’ stories’ have value, but ‘researchers’ stories’ don’t?

We don’t respond to these criticisms of the patient experience by arguing that what’s needed is more rigorous and robust research into patients’ lives. So why do we respond to these criticisms of the researchers’ experience by arguing that more robust measures of impact are required?

Maybe we need to listen to our own arguments on this one? We’ve nearly twenty years of experience and hundreds of researchers’ and patients’ ‘stories’ of involvement to reflect on – could we be doing more to draw out their valuable insights and learning?

You can’t just ‘take two of these’…


Public involvement is not an intervention

Perhaps it’s not surprising – given that we’re all working in the world of health research whose fundamental purpose is to support evidence-based medicine – that we can’t help but think of involvement as an ‘intervention’ and seek ‘evidence’ of its impact. The questions we ask about involvement are the same ones we ask about new treatments:

  • Does it work?
  • What difference does it make and can we measure that?
  • What are the benefits and risks?
  • Is it cost-effective?

I think we’re getting stuck with this thinking – because involvement isn’t an intervention. There’s no precisely defined, standardised action called ‘public involvement’ – it’s varied, complex and highly dependent on context. You can’t just add two patients to a steering committee and predict a specific outcome…

I find it more helpful to think about involvement as a conversation. It’s an exchange of ideas, values, assumptions and experiences between researchers and patients. It’s an ongoing dialogue – an interaction that evolves over time. When you think about it this way, those questions don’t seem so appropriate.

  • Does a conversation work?
  • What difference does a conversation make? Can we measure that!?!
  • What are the risks and benefits of a conversation?
  • Is it cost-effective to have a conversation?

The answers become ‘Well, it depends’. Mostly it depends on why you want to have the conversation and what you hope will come out of it. It depends on what the conversation is about, what gets said, who takes part, where and when it takes place, how it’s done, and importantly what people knew before they started – these are all the factors that influence the impact of involvement. You can’t neatly separate out the outcomes from its purpose, context and the way it’s carried out – these are all inter-related.

So I think we should be asking different questions about involvement, something along the lines of:

  • How does it work? What useful things get said?
  • What kinds of learning come out of it?
  • What makes it go well?
  • When and where does it work best?

Or, are there other questions?

Where’s your ‘other-half’?


A gap in involvement policies & practice (#1)

I’ve been reviewing a number of involvement policies recently, and having read through a few, I’ve noticed two gaps I’d not spotted before. I’ll talk about the first one in this post and the second in the next…

One organisation’s policy was all about involving patients and the public in their decision-making committees. At first glance, I thought it was superb! It ticked all the boxes for good practice – a job description and person spec, a fair and transparent recruitment process, excellent training and support. But then I realised – it only talked about patients and members of the public. There was nothing about selecting, preparing or supporting the other committee members.

Involvement is a partnership. So an involvement policy that only talks about one partner is like a marriage-guidance counsellor only talking to one spouse! Both partners need to be encouraged to find ways to work together, to draw on their respective strengths and to develop mutual trust and respect.

I worry that this focus on patients/ the public gives them all the responsibility for making the partnership work – as if it’s just them that makes the difference – when actually it’s about how well everyone round the table works together. Shouldn’t all involvement policies include the ‘other-half’?

Lightbulb moments


The researchers’ experience

Involvement in research is essentially a conversation*. It’s about one group of people talking with another group of people – clinicians/ researchers/ research staff talking with patients/ carers/ members of the public. Through their conversation, they share ideas, knowledge, opinions and experiences.

So when it comes down to it, the impacts of involvement often start by changing what the researcher thinks. I’ve heard researchers describe this as a ‘lightbulb moment’. This causes them to act and/ or communicate differently – which ultimately has an impact on the research they do.

The impact of involvement therefore always depends on where the researcher starts out – what assumptions, skills, values and knowledge they bring to the table. If the researcher starts with the wrong assumptions about what matters to patients, then talking to patients will give them very different ideas about what questions need to be asked, or what outcomes need to be measured. If the researcher is already pretty good at writing in plain English, then involving patients in reviewing their patient information might not make much difference to how easy it is to understand.

The final outcome will also depend on what the researchers experience and do in response – what they hear, learn and value from the conversation, and ultimately what changes they decide to make. The main impact of involvement is therefore on researchers – but we rarely talk about that. We talk about impacts on the research as if it’s separate from the people doing it. Maybe we need to hear more about the researchers’ experiences to gain a deeper understanding of how involvement works?


*except perhaps in the context where patients carry out the research themselves.

It’s a trip!


How do we persuade researchers to do involvement? I think involving patients in research is a bit like being on a roller-coaster – no, not because of its ups and downs, but because you’ve got to experience it for yourself to really know what it’s like.

In the same way that patients provide insights based on their direct experience of a health condition, I think understanding the difference patient involvement makes to research is a form of experiential knowledge. You’ve got to do it – to get it.

Until you’ve sat and talked with a patient and had that lightbulb moment – that reality check which makes you realise ‘You know what – I’ve been thinking about this all wrong’ – until you’ve had that experience, you may not get what involvement means – no matter how much information you’ve been given about it. Researchers who’ve had this experience tend to be the enthusiasts!

So with the roller-coaster… you could know every measurable fact about it, the speed of the ride, the length of track etc. You could know everything about the impact – the changes in cortisol levels, and the endorphin rush – but none of this tells you what it’s like to be on it, or why some people want to do it again and again…

What persuades people to go on a roller-coaster? Watching from the side, hearing someone else rave about it, having someone else go with them on their first ride? What then are the lessons for encouraging researchers to do involvement? Maybe it isn’t about giving them more ‘robust evidence’. Maybe it’s about giving them the bottle to give it a go…

Has anyone tried this kind of approach? I’d love to know if it works…
