Is the impact of involvement…

[Image: dice for gambling]

… all down to luck?

If there aren’t methods for involvement, and a representative sample of patients/the public isn’t required (see Staley & Barron 2019), then it could all start to look a bit uncontrolled for your average researcher. They might be concerned that the outcome of involvement is purely down to chance – something researchers would be very worried about in relation to their results. But involvement is not research, so researchers need to become more confident and relaxed about the fact that its impact is often unpredictable and uncertain.

Perhaps it would help to remember some words of wisdom about embracing serendipity. It was Alexander Fleming who famously had his ‘oops, I accidentally discovered penicillin’ moment, but it was Louis Pasteur who once said:

“Chance favours the prepared mind.” Louis Pasteur

I was reminded of this quote when I read a story about how a group of dementia researchers developed a totally new research project, based on a chance comment made by a family member at a support group meeting. The group was for people affected by a rare form of dementia, where the part of the brain that deals with vision is most affected. One member of the group was telling a story about how their mother-in-law had recently asked them ‘Am I the right way up?’ because she wasn’t sure. This was news to the researcher because previously this form of dementia had only been shown to affect vision, and not balance. On finding out that other group members had had similar experiences, the researchers embarked on a whole new project, with extensive carer and patient involvement, to explore how balance is affected – something they would never have otherwise thought to do.

This example very nicely shows how sometimes the impact of involvement is down to luck – but there’s an awful lot researchers can do to ensure that luck is on their side. One thing I noticed was that these researchers had organised a support group, and were having regular conversations with patients and carers, not necessarily about research. This would no doubt increase their chances of learning something new, and enhance their understanding of patients’ and carers’ concerns.

Then there’s Louis’s point about having a prepared mind. Researchers need to approach involvement with an open mind, to be prepared to learn, perhaps when they least expect it, and perhaps in contrast to their preconceived ideas of what they’re likely to learn. Another researcher in dementia, Georgina Charlesworth, made a similar point when she commented, “In working with people with dementia and their carers… it has been a delight to hear the ideas generated often as ‘throw away’ remarks and ‘asides’ during discussion or tea-break conversations”.

This might be worth considering when supporting and training researchers prior to involvement. Maybe it’s less about ‘how to do it’, and more about ‘how best to prepare researchers’ minds’.




“Why are photos of involvement so boring?”

[Image: businesswoman making a presentation to office colleagues]


Is it because they look like this?

While putting together a report from a public involvement event, a colleague lamented that the photos were so boring. The camera couldn’t capture the energy and enthusiasm in the room. I wondered if the audio would have been more interesting…

As Duncan and I discuss in our recent article, involvement in research might be best described as a conversation between researchers and the public. It’s what gets said and who learns from it that matters. Thinking about involvement this way leads to different ideas about how to prepare for it, evaluate it and report what happened.

A more accurate picture of what’s actually going on might look like this:

[Image: mind map]




How is involvement in research different from qualitative research?

[Image: board meeting icon]

Please send us your views…

Because this is a question we get asked all the time, Bec, Derek and I wanted to develop a simple answer. Drawing on the great work from Canada [1], we have written a short list describing what we see as the key differences between qualitative research and involvement (see table below).

We would love your feedback on whether this list makes sense or could be improved in any way. We’ll use your comments to produce a final version.

Please comment by 31 March, via this blog or tweet @KristinaStaley2 or @DerekCStewart, #QualitativeandPPI or email: 

| Qualitative research project | Involvement in a research project |
| --- | --- |
| Aims to answer a research question | Aims to help select and refine a research question |
| Seeks people’s input as data to answer a research question | Seeks people’s input to inform and influence decisions about how research is designed, delivered and disseminated |
| Researchers have the power to analyse the data in the way they think best | Patients, the public and researchers share power to make joint decisions about the research based on their combined views |
| Generates evidence that may be generally useful | Generates insight and learning that may be specific to the researchers and patients/public involved and their particular project |
| Needs ethical approval | Does not always need ethical approval (see this guidance produced by INVOLVE and NRES) but does need to reflect ethical practice |
| Follows a standard method | Uses a flexible approach that meets the needs of the people involved |
| Seeks views from a representative sample | Seeks a range of perspectives from people with diverse experiences |
| Can be done by one researcher on behalf of the team | Needs many members of a team to be involved, as they could each learn something different from the experience |

Bec Hanley, Kristina Staley and Derek Stewart, Feb 2019

[1] Doria et al. (2018) Sharpening the focus: differentiating between focus groups for patient engagement vs. qualitative research. Research Involvement and Engagement, 4:19.

Turning the pyramid upside down…

[Image: pyramids of Giza, Egypt]


When training patients and the public in involvement…

A patient once told me that doing involvement means ‘turning the pyramid upside down’. It means starting with the patient and then working from there.

That’s the approach we used at The University of Exeter, where I worked with a fabulous team – Andrea Shelley, Emma Cockcroft and Kristin Liabo – to develop a different kind of training for patients and the public. We started with where the patients are at – having experiential knowledge that they might not realise is valuable to researchers, and perhaps being unsure how best to use it.

We purposefully didn’t start at the same place as the research experts who might ask ‘What do patients need to know about research to be involved?’ We didn’t talk about the research cycle, or different kinds of research, or any kind of method. We just talked about what the patients already know and the skills involved in being a critical friend – sharing knowledge constructively to change researchers’ thinking and plans.

We heard from patients who had considerable experience of involvement that they did understand this role, but only after spending some time doing it. A few people described sitting silent in meetings for the first few months while they tried to work out what they were supposed to contribute. We wondered if we could make this clearer at the start, to help people get up to speed more quickly, so that they could go into a meeting with researchers with a sense of what’s expected of them.

That’s not to say that all the technical info isn’t useful. It definitely helps patients understand the context they’re working in, which is also vital to knowing how best to contribute. We suggest our training complements all the excellent training already out there – and perhaps provides a helpful place to start – at the other end of the pyramid.

If there’s no ‘method’ for involvement…


What is the best approach?

I just finished adding 130+ articles from 2018 to INVOLVE’s online libraries of evidence of impact and good practice and noticed a common concern. It seems many researchers believe that one of the barriers to involving the public is not knowing ‘how’ to do it, often lamenting the lack of guidance and evidence around the best methods to use.

They might be waiting a long time.

I’d argue there aren’t ‘methods’ for involvement. Following a method would mean using the same approach every time and expecting consistent outcomes. For all the reasons I’ve discussed before, involvement doesn’t quite work like that in practice. The best approach depends on the context and especially the preferences of the particular individuals who are involved. For example, while lots of researchers set up advisory groups, some have found this simply isn’t an option for the people they want to involve – people with STDs for example, or young people with drug and alcohol problems.

This issue was perfectly illustrated by Andi Skilton’s experience of involvement at the BRC, Moorfields Eye Hospital. Andi involved a group of people who were deafblind and who all had very different preferences for communication, from lip-reading through to signing on their body. Andi and the lead researcher Mariya Moosajee learnt so much from this experience about how to support the involvement of people from this community that they thought it useful to share, and wrote a journal article with suggestions for other researchers in this field.

But the paper came in for some strong criticism after its publication from a young deafblind person, who thought the recommendations didn’t make sufficient mention of the use of technology. Andi’s group was made up of older people who grew up before cochlear implants became available. They seemed to have less interest in, or familiarity with, the use of IT, and possibly limited access to it. So even within the deafblind community, whose members might seem to have common needs, there is huge variation in people’s preferences for the approach that would best support their involvement.

So it seems important not to make any assumptions about what approach will work best for whoever gets involved in a research project. The best ‘method’ might simply be to ask those individuals what they need, and to use whatever approach works best for them.

To measure or not to measure…

[Image: spoof drawing of The Bard with yellow-tinted glasses]

Is that the right question?

Just reflecting on all the stimulating discussions I had at the ‘International Perspectives on Evaluation of Patient & Public Involvement in Research Conference’…

One of the first things the participants were asked was whether the impact of involvement should be measured. The audience was split pretty much 50:50, as I seem to remember. I wondered if this would change over the two days – but my overall sense was that the ‘measurement’ and the ‘non-measurement’ people went off into different rooms, choosing the talks that fitted best with the ideas they had arrived with…

It feels like the field is polarised and a bit stuck in this binary decision. This always prompts me to think we must be asking the wrong question. Dave Green, a patient contributor, really cut through it all when he said what really matters is whether involvement achieves culture change, to generate more relevant research and change the way research gets done.

Isn’t this the question the PPI community should be asking itself? If involvement were genuinely delivering the outcomes we’d like to see, what would that look like? Research that’s actually useful to the public and improves their lives? If that’s what we want, then how do we know whether it’s happening? That might mean measuring some things, but it might not. The choice of method would need to be fit for purpose. And as always, it might simply mean asking the patients…

Time for a new direction?


What changes when we think about involvement as learning?

Public involvement in research has long been described as a process of two-way learning.  Focusing on this learning, and in particular what researchers learn from others’ experiential knowledge, suggests a different way to think about the ‘why’ and the ‘how’, as well as how to evaluate and report involvement.

Read more in this series of blogs:

Blog post #1: PPI. Learning it is. What can Yoda teach us about involvement in research?

Blog post #2: Researchers and the public as ‘thinking partners’. Why there’s no ‘method’ for involvement.

Blog post #3: Conflict as thinking. The challenge in working with different kinds of ‘thinking partners’.

Blog post #4: What is the purpose of involvement? To avoid bias in researchers’ thinking…

Blog post #5: Why try and objectify PPI? What gets lost in the process?

Blog post #6: Evaluating the impact of involvement. Tales of the unexpected?



Evaluating the impact of involvement

[Image: surprised businessman]

Tales of the unexpected?

There’s a lot to discuss around evaluating involvement and I’m looking forward to some excellent conversations next week at the Evaluation Conference in Newcastle.

One of the questions I suggest needs addressing is ‘How do we capture the unexpected?’ The way that involvement makes a difference is often a surprise, particularly for researchers.

For example, one researcher I spoke to about his project on Parkinson’s disease told me how he took his patient information sheet to a panel of patients and carers, expecting they would help make the information easier to understand. But the panel said, ‘No, you’re fine. This information is really well written. We don’t have any suggested changes to the wording, but we do have a worry that you’re planning to interview people on the phone. With Parkinson’s, some people’s voices become very weak, which would make that difficult. Could you send them a survey instead, or perhaps interview them in another way?’

Some of the guidance suggests that if the researcher starts out with a clear purpose for the involvement then they can put together a plan to evaluate it. That wouldn’t have worked in the example above – the outcome was nothing like what the researcher anticipated. He could have planned to evaluate the wrong thing. But he did learn something very useful and relevant to his research. He did change his method as a result.

I’m wondering if it might help to start with a different question. Rather than ‘How can I prove the involvement made a difference?’, it might be something like ‘How do I capture those moments that lead to change?’


Why try and objectify PPI?


What gets lost in the process?

If I were to tell the real story of how my best research results came about when I was a molecular biologist, it would go something like this…

One day I made a big mistake in setting up my experiment and got some totally unexpected results. These were more interesting than what I was looking for in the first place. So I did the flawed experiment again, just to check it was true, and got the same interesting findings. Then I went to talk to my boss to tell him what had happened. This wasn’t so shameful, as I could talk about the cool results as well as owning up to my mistake. Together we came up with more ideas for different experiments to look more closely at the new findings. My team mates also gave me helpful feedback. When I presented these findings at a conference, other experts in the field gave me ideas for even more experiments, and one of the leaders in the field even offered me a job. When it came to writing a journal article reporting this work, I didn’t mention any of this!! Instead I summarised the previous literature on the topic, and made out there was a completely logical and well-thought-out rationale for everything I’d done…

I’m not saying all research is done this way! But now I recognise there was a whole host of people I talked to along the way – people who acted as a sounding board to check my thinking, gave me new ideas and helped me find solutions when things went wrong. I couldn’t have done it without them. But these people didn’t get a single mention in any of my publications. I wasn’t expected to write about them and there wasn’t a standard way to do this. There still isn’t.

Current guidance on how to report involvement and its impact seems to fit this same pattern. In aiming to make the ‘findings objective and robust’, these reports leave out the researchers’ and the public’s experience and learning. I think this means vital information about how involvement works gets lost – it’s like banging a square peg into a round hole! Perhaps we need a new way to report the impact of involvement in research – not one that tries to describe it as a rational and objective process, but one that recognises the subjective and surprising experience of learning. We need to report the content of the conversations – what was said, what was learnt by whom, and what difference this made to those people’s thinking and actions… so it’s a story of what happened to the people, rather than objective data.


What is the purpose of involvement?

[Image: word diagram about good listening]

To avoid bias in researchers’ thinking…

Lots of discussion this week about how researchers are often unclear about the purpose of involvement in the context of their own research, and so remain uncertain about what to do and how to do it.

I think this might be because the purpose of involvement – the reasons for doing it – is often described in very general terms. First, some people say it’s the right thing to do. There is a moral purpose in enabling patients, carers and the public to have their say in research decisions that will have an impact on their lives. The problem with this is that it doesn’t say who needs to be involved and what they should do – so it leaves researchers uncertain about the ‘how’.

Another common reason for involvement is that it will improve the quality of research by making it more relevant and genuinely useful to the end users. This puts the emphasis on the end goal, and again researchers can remain unclear about precisely how to achieve it. Current guidance and best-practice advice tends to suggest that patients/the public should be involved at all stages of research. It describes the impacts involvement can have at each stage, but doesn’t always say what that needs to look like in practice.

I’d like to suggest another purpose for involvement which may help with the ‘how’. It’s for researchers to learn from other people’s experiential knowledge and so avoid bias in their own thinking. With this understanding, researchers should be checking in with patients/the public on every decision they make about their research, to ask ‘Am I missing anything? Have I assumed anything? Am I on the right lines?’

Researchers do this all the time in the conversations they continually have with their peers. They know how to sound out new ideas in an informal chat over coffee. They know how to hold formal meetings to make major decisions. So I’d argue researchers do know ‘how’ to do involvement – it’s just more of the same. I think they’re simply not making the connection between what they already do and learning from conversations with patients/the public. If the purpose of involvement is simply to have a conversation, learn from each other and make a joint decision – how many ways are there to do that?
