“Why are photos of involvement so boring?”

[Image: businesswoman making a presentation to office colleagues]

Is it because they look like this?

While putting together a report from a public involvement event, a colleague lamented that the photos were so boring. The camera couldn’t capture the energy and enthusiasm in the room. I wondered if the audio would have been more interesting…

As Duncan and I discuss in our recent article, involvement in research might be best described as a conversation between researchers and the public. It’s what gets said and who learns from it that matters. Thinking about involvement this way leads to different ideas about how to prepare for it, evaluate it and report what happened.

A more accurate picture of what’s actually going on might look like this:

[Image: mind map]

How is involvement in research different from qualitative research?

Please send us your views….

Because this is a question we get asked all the time, Bec, Derek and I wanted to develop a simple answer. Drawing on the great work from Canada [1], we have written a short list describing what we see as the key differences between qualitative research and involvement (see the table below).

We would love your feedback on whether this list makes sense or could be improved in any way. We’ll use your comments to produce a final version.

Please comment by 31 March via this blog, tweet @KristinaStaley2 or @DerekCStewart using #QualitativeandPPI, or email bec.hanley@gmail.com

Qualitative research project | Involvement in a research project
Aims to answer a research question | Aims to help select and refine a research question
Seeks people’s input as data to answer a research question | Seeks people’s input to inform and influence decisions about how research is designed, delivered and disseminated
Researchers have the power to analyse the data in the way they think best | Patients, the public and researchers share power to make joint decisions about the research based on their combined views
Generates evidence that may be generally useful | Generates insight and learning that may be specific to the researchers and patients/public involved and their particular project
Needs ethical approval | Does not always need ethical approval (see the guidance produced by INVOLVE and NRES) but does need to reflect ethical practice
Follows a standard method | Uses a flexible approach that meets the needs of the people involved
Seeks views from a representative sample | Seeks a range of perspectives from people with diverse experiences
Can be done by one researcher on behalf of the team | Needs many members of a team to be involved, as each could learn something different from the experience

Bec Hanley, Kristina Staley and Derek Stewart, Feb 2019

[1] Doria et al. (2018) Sharpening the focus: differentiating between focus groups for patient engagement vs. qualitative research. Research Involvement and Engagement, 4:19.

Turning the pyramid upside down…


When training patients and the public in involvement…

A patient once told me that doing involvement means ‘turning the pyramid upside down’. It means starting with the patient and then working from there.

That’s the approach we used at The University of Exeter, where I worked with a fabulous team – Andrea Shelley, Emma Cockcroft and Kristin Liabo – to develop a different kind of training for patients and the public. We started with where the patients are at – having experiential knowledge that they might not realise is valuable to researchers, and perhaps being unsure how best to use it.

We deliberately didn’t start at the same place as the research experts, who might ask ‘What do patients need to know about research to be involved?’ We didn’t talk about the research cycle, or different kinds of research, or any kind of method. We just talked about what the patients already know and the skills involved in being a critical friend – sharing knowledge constructively to change researchers’ thinking and plans.

We heard from patients who had considerable experience of involvement that they did understand this role, but only after spending some time doing it. A few people described sitting silent in meetings for the first few months while they tried to work out what they were supposed to contribute. We wondered if we could make this clearer at the start, to help people get up to speed more quickly, so that they could go into a meeting with researchers with a sense of what’s expected of them.

That’s not to say that all the technical info isn’t useful. It definitely helps patients understand the context they’re working in, which is also vital to knowing how best to contribute. We suggest our training complements all the excellent training already out there – and perhaps provides a helpful place to start – at the other end of the pyramid.

If there’s no ‘method’ for involvement…

What is the best approach?

I just finished adding 130+ articles from 2018 to INVOLVE’s online libraries of evidence of impact and good practice and noticed a common concern. It seems many researchers believe that one of the barriers to involving the public is not knowing ‘how’ to do it, often lamenting the lack of guidance and evidence around the best methods to use.

They might be waiting a long time.

I’d argue there aren’t ‘methods’ for involvement. Following a method would mean using the same approach every time and expecting consistent outcomes. For all the reasons I’ve discussed before, involvement doesn’t quite work like that in practice. The best approach depends on the context and especially the preferences of the particular individuals who are involved. For example, while lots of researchers set up advisory groups, some have found this simply isn’t an option for the people they want to involve – people with STDs for example, or young people with drug and alcohol problems.

This issue was perfectly illustrated by Andi Skilton’s experience of involvement at the BRC, Moorfields Eye Hospital. Andi involved a group of people who were deafblind and who all had very different preferences for communication, from lip-reading through to signing on the body. Andi and the lead researcher, Mariya Moosajee, learnt so much from this experience about how to support the involvement of people from this community that they thought it worth sharing, and wrote a journal article with suggestions for other researchers in this field.

But the paper came in for some strong criticism after publication, from a young deafblind person who thought the recommendations made insufficient mention of technology. Andi’s group was made up of older people who grew up before cochlear implants became available. They seemed to have less interest in, or familiarity with, IT, and possibly limited access to it. Even within the deafblind community, where needs might seem to be shared, there is huge variation in people’s preferences for the approach that would best support their involvement.

So it seems important not to make any assumptions about what approach will work best for whoever gets involved in a research project. The best ‘method’ might simply be to ask those individuals what they need, and to use whatever approach works best for them.

To measure or not to measure…

Is that the right question?

Just reflecting on all the stimulating discussions I had at the ‘International Perspectives on Evaluation of Patient & Public Involvement in Research Conference’.

One of the first things participants were asked was whether the impact of involvement should be measured. The audience was split pretty much 50:50, as I remember. I wondered if this would change over the two days – but my overall sense was that the ‘measurement’ and ‘non-measurement’ people went off into different rooms, choosing the talks that best fitted the ideas they already came with…

It feels like the field is polarised and a bit stuck in this binary decision, which always prompts me to think we must be asking the wrong question. Dave Green, a patient contributor, cut through it all when he said that what really matters is whether involvement achieves culture change – generating more relevant research and changing the way research gets done.

Isn’t this the question the PPI community should be asking itself? If involvement was genuinely delivering the outcomes we’d like to see, what would that look like? Research that’s actually useful to the public and improving their lives? If that’s what we want, then how do we know if that’s happening? That might mean measuring some things, but it might not. The choice of method would need to be fit for purpose. And as always, it might simply mean asking the patients…

Time for a new direction?

What changes when we think about involvement as learning?

Public involvement in research has long been described as a process of two-way learning.  Focusing on this learning, and in particular what researchers learn from others’ experiential knowledge, suggests a different way to think about the ‘why’ and the ‘how’, as well as how to evaluate and report involvement.

Read more in this series of blogs:

Blog post #1: PPI. Learning it is. What can Yoda teach us about involvement in research?

Blog post #2: Researchers and the public as ‘thinking partners’. Why there’s no ‘method’ for involvement.

Blog post #3: Conflict as thinking. The challenge in working with different kinds of ‘thinking partners’.

Blog post #4: What is the purpose of involvement? To avoid bias in researchers’ thinking…

Blog post #5: Why try and objectify PPI? What gets lost in the process?

Blog post #6: Evaluating the impact of involvement. Tales of the unexpected?

Evaluating the impact of involvement

Tales of the unexpected?

There’s a lot to discuss around evaluating involvement and I’m looking forward to some excellent conversations next week at the Evaluation Conference in Newcastle.

One of the questions I suggest needs addressing is ‘How do we capture the unexpected?’ The way that involvement makes a difference is often a surprise, particularly for researchers.

For example, one researcher I spoke to about his project on Parkinson’s disease, told me how he took his patient information sheet to a panel of patients and carers expecting they would help make the information easier to understand. But the panel said ‘No you’re fine. This information is really well written. We don’t have any suggested changes for the wording, but we do have a worry that you’re planning to interview people on the phone. With Parkinson’s, some people’s voices become very weak which would make that difficult. Could you send them a survey instead or perhaps interview them in another way?’

Some of the guidance suggests that if the researcher starts out with a clear purpose for the involvement then they can put together a plan to evaluate it. That wouldn’t have worked in the example above – the outcome was nothing like what the researcher anticipated. He could have planned to evaluate the wrong thing. But he did learn something very useful and relevant to his research. He did change his method as a result.

I’m wondering if it might help to start with a different question. Rather than ‘How can I prove the involvement made a difference?’, it might be something like ‘How do I capture those moments that lead to change?’
