Evaluating the impact of involvement

Tales of the unexpected?

There’s a lot to discuss around evaluating involvement and I’m looking forward to some excellent conversations next week at the Evaluation Conference in Newcastle.

One of the questions I suggest needs addressing is ‘How do we capture the unexpected?’ The way that involvement makes a difference is often a surprise, particularly for researchers.

For example, one researcher I spoke to about his project on Parkinson’s disease told me how he took his patient information sheet to a panel of patients and carers, expecting they would help make the information easier to understand. But the panel said ‘No, you’re fine. This information is really well written. We don’t have any suggested changes for the wording, but we do have a worry that you’re planning to interview people on the phone. With Parkinson’s, some people’s voices become very weak, which would make that difficult. Could you send them a survey instead, or perhaps interview them in another way?’

Some of the guidance suggests that if the researcher starts out with a clear purpose for the involvement, then they can put together a plan to evaluate it. That wouldn’t have worked in the example above – the outcome was nothing like what the researcher anticipated. He could have planned to evaluate the wrong thing. But he did learn something very useful and relevant to his research, and he did change his method as a result.

I’m wondering if it might help to start with a different question. Rather than ‘How can I prove the involvement made a difference?’, it might be something like ‘How do I capture those moments that lead to change?’

Blog post #1: PPI. Learning it is. What can Yoda teach us about involvement in research?

Blog post #2: Researchers and the public as ‘thinking partners’. Why there’s no ‘method’ for involvement.

Blog post #3: Conflict as thinking. The challenge in working with different kinds of ‘thinking partners’.

Blog post #4: What is the purpose of involvement? To avoid bias in researchers’ thinking…

Blog post #5: Why try and objectify PPI? What gets lost in the process?

Why try and objectify PPI?


What gets lost in the process?

If I were to tell the real story of how my best research results came about when I was a molecular biologist, it would go something like this…

One day I made a big mistake in setting up my experiment and got some totally unexpected results. These were more interesting than what I was looking for in the first place. So I did the flawed experiment again, just to check it was true, and got the same interesting findings. Then I went to talk to my boss to tell him what had happened. This wasn’t so shameful, as I could talk about the cool results as well as owning up to my mistake. Together we came up with more ideas for different experiments to look more closely at the new findings. My teammates also gave me helpful feedback. When I presented these findings at a conference, other experts in the field gave me ideas for even more experiments, and one of the leaders in the field even offered me a job. When it came to writing a journal article reporting this work, I didn’t mention any of this! Instead I summarised the previous literature on the topic, and made out there was a completely logical and well-thought-out rationale for everything I’d done…

I’m not saying all research is done this way! But now I recognise there was a whole host of people I talked to along the way – people who acted as a sounding board to check my thinking, gave me new ideas and helped me find solutions when things went wrong. I couldn’t have done it without them. But these people didn’t get a single mention in any of my publications. I wasn’t expected to write about them, and there wasn’t a standard way to do this. There still isn’t.

Current guidance on how to report involvement and its impact seems to fit this same pattern. In aiming to make the ‘findings objective and robust’, these reports leave out the researchers’ and public’s experience and learning. I think this means vital information about how involvement works gets lost – it’s like banging a square peg into a round hole! Perhaps we need a new way to report the impact of involvement in research – not one that tries to describe it as a rational and objective process, but one that recognises the subjective and surprising experience of learning. We need to report the content of the conversations – what was said, what was learnt by whom, and what difference this made to those people’s thinking and actions… so it’s a story of what happened to the people, rather than objective data.


What is the purpose of involvement?

To avoid bias in researchers’ thinking…

Lots of discussion this week about how researchers are often unclear about the purpose of involvement in the context of their own research, and so remain uncertain about what to do and how to do it.

I think this might be because the purpose of involvement – the reasons for doing it – is often described in very general terms. Firstly, some people say it’s the right thing to do. There is a moral purpose in enabling patients, carers and the public to have their say in research decisions that will have an impact on their lives. The problem with this is that it doesn’t say who needs to be involved and what they should do – so it leaves researchers uncertain about the ‘how’.

Another common reason for involvement is that it will improve the quality of research by making it more relevant and genuinely useful to the end users. This puts the emphasis on the end goal, and again researchers can remain unclear about precisely how to achieve it. Current guidance and best practice advice tends to suggest that patients/the public should be involved at all stages of research. It describes the impacts involvement can have at each stage, but doesn’t always say how that needs to look in practice.

I’d like to suggest another purpose for involvement which may help with the ‘how’. It’s for researchers to learn from other people’s experiential knowledge and avoid bias in their own thinking. With this understanding, researchers should be checking in with patients/the public in every decision they make about their research, to ask ‘Am I missing anything? Have I assumed anything? Am I on the right lines?’

Researchers do this all the time in the continual conversations they have with their peers. They know how to sound out new ideas in an informal chat over coffee. They know how to hold formal meetings to make major decisions. So I’d argue researchers do know ‘how’ to do involvement – it’s just more of the same. I think they’re just not making the connection between learning from conversations with patients/the public and what they do already. If the purpose of involvement is simply to have a conversation, learn from each other and make a joint decision – how many ways are there to do that?

Conflict as thinking…

The challenge in working with different kinds of ‘thinking partners’…

I recently read the excellent report from the Alzheimer’s Society on their evaluation of their research network. One quote that leapt out for me was from a carer who said:

Sometimes involvement can feel like conflict even though everyone is on the same side. Everyone wants good quality research to generate evidence that can change people’s lives for the better. But because researchers and patients come at this from different angles, it can feel like there’s a disagreement. Carer (anon)

For me this encapsulates one of the challenges for researchers in working with patient/public ‘thinking partners’. But I’d suggest this is good, healthy conflict. It’s what helps to sharpen ideas and make better decisions. It’s ‘conflict as thinking’, as Margaret Heffernan described in a TED talk in 2012. She’s concerned that people often try to avoid conflict because it makes us so uncomfortable. She says we need to dare to disagree to get the benefits. She sees conflict as:

“… a fantastic model of collaboration. It’s about working with thinking partners who aren’t echo chambers. I wonder how many of us have, or dare to have, such collaborators… It requires that we find people who are very different from ourselves. That means we have to resist the natural drive to find people mostly like us, and it means we have to seek out people with different backgrounds, with different ways of thinking and different experience, and find ways to engage with them. That requires a lot of patience and a lot of energy.”
Margaret Heffernan

The value of researchers working with patient/public thinking partners is that their knowledge and experiences are so very different – but engaging in this kind of constructive conflict is difficult. What I sometimes see is researchers becoming defensive of their ideas and resistant to change, or else overly deferential to patients/the public and reluctant to be critical of others’ contributions. When involvement is understood as a conversation that supports two-way learning, sharing ideas and working through disagreements is a crucial part of the process. I wonder what more could be done to prepare everyone involved to listen and learn from different perspectives. Would it help if people simply saw conflict as good thinking?

Researchers and the public as ‘thinking partners’…


Why there’s no ‘method’ for involvement

At its most basic, involvement is a conversation between researchers, patients, carers and the public that leads to two-way learning, and ultimately to better decisions and new ideas. It’s just talking and thinking. They’re ‘thinking partners’.

But researchers often ask me ‘What’s the best method for involvement?’ My answer is ‘There isn’t one’. A method is a fixed set of steps that gives the same answer no matter who follows them. That’s not true of involvement. It absolutely matters which researcher does the involvement, because what each researcher learns from the experience will be different. It all depends on what that particular researcher doesn’t know at the start, and what assumptions he/she has made.

When I look back to when I was a wee molecular biologist, I realise I spent a lot of my time just talking and thinking; only some of my time was spent doing the test-tube stuff. I had very many ‘thinking partners’, including my supervisors, my teammates, other people in my department, people in other parts of the university, people in other universities, researchers in the UK and researchers in other countries. If I were doing it now, patients and carers would be included in that group.

My sense is that all these ‘thinking partners’ do similar things – they help the researcher to come up with new ideas, to avoid potential pitfalls, to solve problems, to make better informed choices about the direction of research and to help make sense of the findings. Sometimes the most important thing they do is to confirm that all is well.

So this way of working is not new for researchers. It is commonplace and everyday. But researchers never seem to ask ‘What’s the best method for me to talk with my colleagues?’ In fact, you can do it in all sorts of ways, including:

  • In one-to-one meetings in an office
  • Over coffee in a café
  • In the tea room at work
  • In formal team meetings
  • In informal team meetings
  • As part of a steering group
  • In departmental talks
  • At conferences – in the lunch queue, at a poster, at a talk, in a panel session, in the bar
  • And very often in the pub on a Friday night

I think all of these could work equally well in learning from patient, carer and public thinking partners! If involvement is just more of the same, then questions about how to do it are much easier to answer. It all depends on the conversation you want to have and the people you want to talk to – no fixed steps, just talking and thinking.

PPI. Learning it is.


What can Yoda teach us about public involvement in research?

Turns out, I have a favourite Yoda quote and it’s this:

“Always pass on what you have learned.” – Yoda

For me, learning through conversations with other people is what involvement is all about. If I was going to get all hippy on you, I’d say it’s what life’s about and fundamental to everything that people do…

Very specifically, I’d suggest that what researchers learn from involvement, from their conversations with patients, carers and the public, often leads to the impacts on research which are widely known – but far too often what the researcher learns isn’t passed on. It’s not captured or properly described. It can be easily dismissed as ‘anecdotal’.

For decades, PPI people have been on a quest for the holy grail – ‘a tool to measure impact’ – but nobody’s quite managed it. Why is that? I’d argue it’s because we’ve been looking at involvement all the wrong way. Maybe it’s time to change our thinking and listen to Yoda.

When involvement is understood as learning, then very different questions (and answers) emerge around how to do it, why do it, who to involve, what difference it makes, and how to report it.

This is the first of six blogs in which I explore the implications of understanding involvement as learning. I am hoping to pass on what I’ve learned from the many conversations I’ve had with lots of brilliant people. A big thank-you to all of them for sharing their ideas, knowledge, expertise and experience. I always learn something new from each and every conversation, so I hope these posts will be the start of many more.

How is involvement different to qualitative research?

Let me count the ways…

Last week I gave a couple of talks about involvement to researchers in Denmark. In the Q&A session, one researcher commented, “There is nothing new here. It’s just qualitative research. We’ve been doing that for years”. I’ve had this comment before and never felt I had a good enough answer…

Now I can point people to the excellent paper that just came out on this topic from Canada, which describes the many practical differences between these two activities.

But I wonder if there is another distinction that could be added to the list… For me it comes down to whose knowledge is being improved. With qualitative research, the aim is to contribute to general knowledge and awareness, adding to a body of evidence. By contrast, involvement often improves the knowledge of a specific individual.

A good example of this comes from a published report of the impact of involvement on a funding bid [1]. The researchers in this example consulted a PPI group about a project on the social and financial implications of carpal tunnel syndrome. A patient in the group explained how she had lost her job taking blood samples, because she had lost her fine finger movement through carpal tunnel syndrome. This was a revelation to these researchers! They hadn’t considered people’s working lives in their proposal. So they changed it, and ultimately the project got funded.

Such an insight could have come from qualitative research and could have already been published as data. There must be countless research publications that describe how health conditions generally impact on work. But the point in this example was that these particular researchers didn’t know that. It was a specific gap in their knowledge and understanding. Other researchers may not have had this gap, and would have come to the group with a different proposal.

One conclusion from this is that involvement can’t be carried out in the same way as qualitative research, where one person in the team ‘does the PPI’ and then reports back with the ‘results’. Every researcher needs to work with patients and carers, because each one might have different gaps in their knowledge and awareness. This also means that different members of a research team might find that the impact of involvement is not the same…

[1] Carter et al. (2012) Mobilising the experiential knowledge of clinicians, patients and carers for applied health-care research. Contemporary Social Science, 8:3, 307-320.