… all down to luck?
If there aren’t methods for involvement and a representative sample of patients/public isn’t required (see Staley & Barron 2019), then it could all start to look a bit uncontrolled to your average researcher. They might be concerned that the outcome of involvement is purely down to chance – something researchers would be very worried about in relation to their results. But involvement is not research, so somehow researchers need to feel more confident and relaxed about the fact that the impact of involvement is often unpredictable and uncertain.
Perhaps it would help to remember the words of wisdom from a researcher who was forced to embrace serendipity – Louis Pasteur. (The ‘oops I accidentally discovered penicillin’ story belongs to Alexander Fleming; Pasteur’s own happy accident was a neglected chicken cholera culture that led him to the principle of vaccination.) He once said:
“Chance favours the prepared mind.” Louis Pasteur
I was reminded of this quote when I read a story about how a group of dementia researchers developed a totally new research project, based on a chance comment made by a family member at a support group meeting. The group was for people affected by a rare form of dementia, where the part of the brain that deals with vision is most affected. One member of the group was telling a story about how their mother-in-law had recently asked them ‘Am I the right way up?’ because she wasn’t sure. This was news to the researcher because previously this form of dementia had only been shown to affect vision, and not balance. On finding out that other group members had had similar experiences, the researchers embarked on a whole new project, with extensive carer and patient involvement, to explore how balance is affected – something they would never have otherwise thought to do.
This example very nicely shows how sometimes the impact of involvement is down to luck – but there’s an awful lot researchers can do to ensure that luck is on their side. One thing I noticed was that these researchers had organised a support group, and were having regular conversations with patients and carers, not necessarily about research. This would no doubt increase their chances of learning something new, and enhance their understanding of patients’ and carers’ concerns.
Then there’s Louis’s point about having a prepared mind. Researchers need to approach involvement with an open mind, to be prepared to learn, perhaps when they least expect it, and perhaps in contrast to their preconceived ideas of what they’re likely to learn. Another researcher in dementia, Georgina Charlesworth, made a similar point when she commented, “In working with people with dementia and their carers… it has been a delight to hear the ideas generated often as ‘throw away’ remarks and ‘asides’ during discussion or tea-break conversations”.
This might be worth considering when supporting and training researchers prior to involvement. Maybe it’s less about ‘how to do it’, and more about ‘how best to prepare researchers’ minds’.
Is it because they look like this?
While putting together a report from a public involvement event, a colleague lamented that the photos were so boring. The camera couldn’t capture the energy and enthusiasm in the room. I wondered if the audio would have been more interesting…
As Duncan and I discuss in our recent article, involvement in research might be best described as a conversation between researchers and the public. It’s what gets said and who learns from it that matters. Thinking about involvement this way leads to different ideas about how to prepare for it, evaluate it and report what happened.
A more accurate picture of what’s actually going on might look like this:
Please send us your views…
Because this is a question we get asked all the time, Bec, Derek and I wanted to develop a simple answer. Drawing on the great work from Canada (Doria et al. 2018), we have written a short list describing what we see as the key differences between qualitative research and involvement (see table below).
We would love your feedback on whether this list makes sense or could be improved in any way. We’ll use your comments to produce a final version.
Please comment by 31 March, via this blog or tweet @KristinaStaley2 or @DerekCStewart, #QualitativeandPPI or email: firstname.lastname@example.org
| Qualitative research project | Involvement in a research project |
|---|---|
| Aims to answer a research question | Aims to help select and refine a research question |
| Seeks people’s input as data to answer a research question | Seeks people’s input to inform and influence decisions about how research is designed, delivered and disseminated |
| Researchers have the power to analyse the data in the way they think best | Patients, the public and researchers share power to make joint decisions about the research based on their combined views |
| Generates evidence that may be generally useful | Generates insight and learning that may be specific to the researchers and patients/public involved and their particular project |
| Needs ethical approval | Does not always need ethical approval (see this guidance produced by INVOLVE and NRES) but does need to reflect ethical practice |
| Follows a standard method | Uses a flexible approach that meets the needs of the people involved |
| Seeks views from a representative sample | Seeks a range of perspectives from people with diverse experiences |
| Can be done by one researcher on behalf of the team | Needs many members of a team to be involved as they could each learn something different from the experience |
Bec Hanley, Kristina Staley, Derek Stewart Feb 2019
Doria et al. (2018). Sharpening the focus: differentiating between focus groups for patient engagement vs. qualitative research. Research Involvement and Engagement, 4:19.
When training patients and the public in involvement…
A patient once told me that doing involvement means ‘turning the pyramid upside down’. It means starting with the patient and then working from there.
That’s the approach we used at The University of Exeter, where I worked with a fabulous team – Andrea Shelley, Emma Cockcroft and Kristin Liabo – to develop a different kind of training for patients and the public. We started with where the patients are at – having experiential knowledge that they might not realise is valuable to researchers, and perhaps being unsure how best to use it.
We purposefully didn’t start at the same place as the research experts who might ask ‘What do patients need to know about research to be involved?’ We didn’t talk about the research cycle, or different kinds of research, or any kind of method. We just talked about what the patients already know and the skills involved in being a critical friend – sharing knowledge constructively to change researchers’ thinking and plans.
We heard from patients who had considerable experience of involvement that they did understand this role, but only after spending some time doing it. A few people described sitting silent in meetings for the first few months while they tried to work out what they were supposed to contribute. We wondered if we could make this clearer at the start, to help people get up to speed more quickly, so that they could go into a meeting with researchers with a sense of what’s expected of them.
That’s not to say that all the technical info isn’t useful. It definitely helps patients understand the context they’re working in, which is also vital to knowing how best to contribute. We suggest our training complements all the excellent training already out there – and perhaps provides a helpful place to start – at the other end of the pyramid.
What is the best approach?
I just finished adding 130+ articles from 2018 to INVOLVE’s online libraries of evidence of impact and good practice and noticed a common concern. It seems many researchers believe that one of the barriers to involving the public is not knowing ‘how’ to do it, often lamenting the lack of guidance and evidence around the best methods to use.
They might be waiting a long time.
I’d argue there aren’t ‘methods’ for involvement. Following a method would mean using the same approach every time and expecting consistent outcomes. For all the reasons I’ve discussed before, involvement doesn’t quite work like that in practice. The best approach depends on the context and especially the preferences of the particular individuals who are involved. For example, while lots of researchers set up advisory groups, some have found this simply isn’t an option for the people they want to involve – people with STDs for example, or young people with drug and alcohol problems.
This issue was perfectly illustrated by Andi Skilton’s experience of involvement at the BRC, Moorfields Eye Hospital. Andi involved a group of people who were deafblind and who all had very different preferences for communication, from lip-reading through to signing on their body. Andi and the lead researcher Mariya Moosajee learnt so much from this experience about how to support the involvement of people from this community that they thought it worth sharing with other researchers. They wrote a journal article with suggestions for other researchers in this field.
But the paper came in for some strong criticism after its publication, from a young deafblind person who thought the recommendations didn’t make sufficient mention of the use of technology. Andi’s group was made up of older people, who grew up before cochlear implants became available. They seemed to have less interest in, or familiarity with, the use of IT, and possibly limited access. So even in the deafblind community, where needs might seem to be shared, there is huge variation in people’s preferences for the approach that would best support their involvement.
So it seems important not to make any assumptions about what approach will work best for whoever gets involved in a research project. The best ‘method’ might simply be to ask those individuals what they need, and to use whatever approach works best for them.
Is that the right question?
Just reflecting on all the stimulating discussions I had at the ‘International Perspectives on Evaluation of Patient & Public Involvement in Research Conference’…
One of the first things the participants were asked was whether the impact of involvement should be measured. The audience was split pretty much 50:50, as I remember. I wondered if this would change over the two days – but my overall sense was that the ‘measurement’ and the ‘non-measurement’ people went off into different rooms, choosing the talks that fitted best with the ideas they already came with…
It feels like the field is polarised and a bit stuck in this binary decision. That always prompts me to think we must be asking the wrong question. Dave Green, a patient contributor, cut through it all when he said that what really matters is whether involvement achieves culture change – generating more relevant research and changing the way research gets done.
Isn’t this the question the PPI community should be asking itself? If involvement was genuinely delivering the outcomes we’d like to see, what would that look like? Research that’s actually useful to the public and improving their lives? If that’s what we want, then how do we know if that’s happening? That might mean measuring some things, but it might not. The choice of method would need to be fit for purpose. And as always, it might simply mean asking the patients…
What changes when we think about involvement as learning?
Public involvement in research has long been described as a process of two-way learning. Focusing on this learning, and in particular what researchers learn from others’ experiential knowledge, suggests a different way to think about the ‘why’ and the ‘how’, as well as how to evaluate and report involvement.
Read more in this series of blogs:
Blog post #1: PPI. Learning it is. What can Yoda teach us about involvement in research?
Blog post #2: Researchers and the public as ‘thinking partners’. Why there’s no ‘method’ for involvement.
Blog post #3: Conflict as thinking. The challenge in working with different kinds of ‘thinking partners’.
Blog post #4: What is the purpose of involvement? To avoid bias in researchers’ thinking…
Blog post #5: Why try and objectify PPI? What gets lost in the process?
Blog post #6: Evaluating the impact of involvement. Tales of the unexpected?