Turning the pyramid upside down…

When training patients and the public in involvement…

A patient once told me that doing involvement means ‘turning the pyramid upside down’. It means starting with the patient and then working from there.

That’s the approach we used at the University of Exeter, where I worked with a fabulous team – Andrea Shelley, Emma Cockcroft and Kristin Liabo – to develop a different kind of training for patients and the public. We started with where the patients are: having experiential knowledge that they might not realise is valuable to researchers, and perhaps being unsure how best to use it.

We purposefully didn’t start at the same place as the research experts who might ask ‘What do patients need to know about research to be involved?’ We didn’t talk about the research cycle, or different kinds of research, or any kind of method. We just talked about what the patients already know and the skills involved in being a critical friend – sharing knowledge constructively to change researchers’ thinking and plans.

We heard from patients with considerable experience of involvement that they did come to understand this role, but only after spending some time doing it. A few described sitting in silence through their first few months of meetings while they tried to work out what they were supposed to contribute. We wondered if we could make this clearer at the start, so that people could get up to speed more quickly and go into a meeting with researchers knowing what’s expected of them.

That’s not to say that all the technical info isn’t useful. It definitely helps patients understand the context they’re working in, which is also vital to knowing how best to contribute. We suggest our training complements all the excellent training already out there – and perhaps provides a helpful place to start – at the other end of the pyramid.

If there’s no ‘method’ for involvement…

What is the best approach?

I’ve just finished adding more than 130 articles from 2018 to INVOLVE’s online libraries of evidence of impact and good practice, and I noticed a common concern. It seems many researchers believe that one of the barriers to involving the public is not knowing ‘how’ to do it, and they often lament the lack of guidance and evidence around the best methods to use.

They might be waiting a long time.

I’d argue there aren’t ‘methods’ for involvement. Following a method would mean using the same approach every time and expecting consistent outcomes. For all the reasons I’ve discussed before, involvement doesn’t quite work like that in practice. The best approach depends on the context, and especially on the preferences of the particular individuals who are involved. For example, while lots of researchers set up advisory groups, some have found this simply isn’t an option for the people they want to involve – people with STDs, for example, or young people with drug and alcohol problems.

This issue was perfectly illustrated by Andi Skilton’s experience of involvement at the BRC, Moorfields Eye Hospital. Andi involved a group of people who were deafblind and who all had very different preferences for communication, from lip-reading through to signing on the body. Andi and the lead researcher Mariya Moosajee learnt so much from this experience about how to support the involvement of people from this community that they thought it worth sharing, so they wrote a journal article with suggestions for other researchers in this field.

But the paper came in for some strong criticism after its publication, from a young deafblind person who thought the recommendations didn’t make sufficient mention of the use of technology. Andi’s group was made up of older people, who grew up before cochlear implants became available. They seemed to have less interest in, or familiarity with, the use of IT, and possibly more limited access to it. Even in the deafblind community, where needs might seem to be shared, there is huge variation in people’s preferences for the approach that would best support their involvement.

So it seems important not to make any assumptions about what approach will work best for whoever gets involved in a research project. The best ‘method’ might simply be to ask those individuals what they need, and to use whatever approach works best for them.

To measure or not to measure…

Is that the right question?

Just reflecting on all the stimulating discussions I had at the ‘International Perspectives on Evaluation of Patient & Public Involvement in Research Conference’.

One of the first things participants were asked was whether the impact of involvement should be measured. The audience was split pretty much 50:50, as I remember. I wondered if this would change over the two days – but my overall sense was that the ‘measurement’ and the ‘non-measurement’ people went off into different rooms, choosing the talks that best fitted the ideas they had arrived with…

It feels like the field is polarised, and a bit stuck in this binary decision. That always prompts me to think we must be asking the wrong question. Dave Green, a patient contributor, cut through it all when he said that what really matters is whether involvement achieves culture change – generating more relevant research and changing the way research gets done.

Isn’t this the question the PPI community should be asking itself? If involvement were genuinely delivering the outcomes we’d like to see, what would that look like? Research that’s actually useful to the public and improves their lives? If that’s what we want, how do we know whether it’s happening? That might mean measuring some things, but it might not. The choice of method would need to be fit for purpose. And, as always, it might simply mean asking the patients…

Time for a new direction?

What changes when we think about involvement as learning?

Public involvement in research has long been described as a process of two-way learning. Focusing on this learning – in particular, what researchers learn from others’ experiential knowledge – suggests a different way to think about the ‘why’ and the ‘how’, as well as how to evaluate and report involvement.

Read more in this series of blogs:

Blog post #1: PPI. Learning it is. What can Yoda teach us about involvement in research?

Blog post #2: Researchers and the public as ‘thinking partners’. Why there’s no ‘method’ for involvement.

Blog post #3: Conflict as thinking. The challenge in working with different kinds of ‘thinking partners’.

Blog post #4: What is the purpose of involvement? To avoid bias in researchers’ thinking…

Blog post #5: Why try and objectify PPI? What gets lost in the process?

Blog post #6: Evaluating the impact of involvement. Tales of the unexpected?
Evaluating the impact of involvement

Tales of the unexpected?

There’s a lot to discuss around evaluating involvement and I’m looking forward to some excellent conversations next week at the Evaluation Conference in Newcastle.

One of the questions I suggest needs addressing is ‘How do we capture the unexpected?’ The way involvement makes a difference often comes as a surprise, particularly to researchers.

For example, one researcher I spoke to about his project on Parkinson’s disease told me how he took his patient information sheet to a panel of patients and carers, expecting they would help make the information easier to understand. But the panel said ‘No, you’re fine. This information is really well written. We don’t have any suggested changes to the wording, but we do have a worry that you’re planning to interview people on the phone. With Parkinson’s, some people’s voices become very weak, which would make that difficult. Could you send them a survey instead, or perhaps interview them in another way?’

Some of the guidance suggests that if the researcher starts out with a clear purpose for the involvement, they can put together a plan to evaluate it. That wouldn’t have worked in the example above – the outcome was nothing like what the researcher anticipated, and he could have planned to evaluate the wrong thing. But he did learn something very useful and relevant to his research, and he did change his method as a result.

I’m wondering if it might help to start with a different question. Rather than ‘How can I prove the involvement made a difference?’, it might be something like ‘How do I capture those moments that lead to change?’

Why try and objectify PPI?

What gets lost in the process?

If I were to tell the real story of how my best research results came about when I was a molecular biologist, it would go something like this…

One day I made a big mistake in setting up my experiment and got some totally unexpected results. These were more interesting than what I was looking for in the first place. So I did the flawed experiment again, just to check, and got the same interesting findings. Then I went to tell my boss what had happened. This wasn’t so shameful, as I could talk about the cool results as well as owning up to my mistake. Together we came up with ideas for further experiments to look more closely at the new findings. My teammates also gave me helpful feedback. When I presented these findings at a conference, other experts in the field gave me ideas for even more experiments, and one of the leaders in the field even offered me a job. When it came to writing a journal article reporting this work, I didn’t mention any of this! Instead I summarised the previous literature on the topic and made out there was a completely logical and well-thought-out rationale for everything I’d done…

I’m not saying all research is done this way! But now I recognise there was a whole host of people I talked to along the way – people who acted as a sounding board to check my thinking, gave me new ideas and helped me find solutions when things went wrong. I couldn’t have done it without them. But these people didn’t get a single mention in any of my publications. I wasn’t expected to write about them and there wasn’t a standard way to do this. There still isn’t.

Current guidance on how to report involvement and its impact seems to fit this same pattern. In aiming to make the findings ‘objective and robust’, these reports leave out the researchers’ and the public’s experience and learning. I think this means vital information about how involvement works gets lost – it’s like banging a square peg into a round hole! Perhaps we need a new way to report the impact of involvement in research – not one that tries to describe it as a rational and objective process, but one that recognises the subjective and surprising experience of learning. We need to report the content of the conversations: what was said, what was learnt by whom, and what difference this made to those people’s thinking and actions… a story of what happened to the people, rather than objective data.

What is the purpose of involvement?

To avoid bias in researchers’ thinking…

There’s been lots of discussion this week about how researchers are often unclear about the purpose of involvement in the context of their own research, and so remain uncertain about what to do and how to do it.

I think this might be because the purpose of involvement – the reasons for doing it – is often described in very general terms. Firstly, some people say it’s the right thing to do: there is a moral purpose in enabling patients, carers and the public to have their say in research decisions that will have an impact on their lives. The problem with this is that it doesn’t say who needs to be involved or what they should do – so it leaves researchers uncertain about the ‘how’.

Another common reason for involvement is that it improves the quality of research by making it more relevant and genuinely useful to end users. This puts the emphasis on the end goal, and again researchers can remain unclear about precisely how to achieve it. Current guidance and best-practice advice tends to suggest that patients and the public should be involved at all stages of research. It describes the impacts involvement can have at each stage, but doesn’t always say what that needs to look like in practice.

I’d like to suggest another purpose for involvement, which may help with the ‘how’: for researchers to learn from other people’s experiential knowledge and avoid bias in their own thinking. With this understanding, researchers would check in with patients and the public on every decision they make about their research, asking ‘Am I missing anything? Have I assumed anything? Am I on the right lines?’

Researchers do this all the time in the ongoing conversations they have with their peers. They know how to sound out new ideas in an informal chat over coffee. They know how to hold formal meetings to make major decisions. So I’d argue researchers do know ‘how’ to do involvement – it’s just more of the same. They’re simply not making the connection between learning from conversations with patients and the public and what they do already. If the purpose of involvement is simply to have a conversation, learn from each other and make a joint decision – how many ways are there to do that?
