As part of our Gaun Yersel! stories, Lisa considers the evidence needed to ensure the focus is on learning how self management works.
‘But what is the evidence for supported self management?’ is a question that, as a researcher, I am often asked. ‘Does supported self management work?’ is another. The evidence gathered from research suggests that yes, it does work. What I’m rarely asked, however, is ‘how does it work?’ and ‘why does it work for some people, or in some situations, and not others?’
Evidence is a powerful catalyst for change. We need evidence to encourage buy-in from our colleagues or managers for change, to support cases for funding, to commission (or decommission) services, to identify that we are meeting people's needs, and to design effective interventions, projects and approaches.
The importance of evidence-based care underpins health and social care practitioners’ training and is a regular topic of discussion for anyone involved in the provision or commissioning of health and social care services. But the usefulness and meaningfulness of the evidence depends on the question being asked.
Randomised Controlled Trials (RCTs) and experimental methods are widely considered to be the gold standard for producing robust evidence to underpin complex interventions such as supported self management. As such, the evidence base on supported self management has largely been driven by the question ‘does it work?’ Knowing whether supported self management works or not is important for informing what we do. However, this question only addresses part of the story.
The RCT model – frequently favoured by funders – embraces the notion of supported self management as being a neatly packaged intervention which can be disconnected from the people and the environments surrounding it. However, supported self management is rarely a neatly packaged intervention; it is messy and complex, with multiple components, mechanisms of action, and outcomes.
An intervention like supported self management rarely works because of the intervention itself. It works because the mechanisms of action in the intervention are ‘triggered’ by a unique combination of factors operating both within and outwith the intervention itself. It is this complex combination that creates the ‘right’ conditions for the intervention to work. In an RCT approach, however, supported self management interventions are tested under ‘ideal’, ‘trial-like’ conditions which purposely remove the interaction of other factors that may influence success or failure. Thus the question ‘does it work?’ overlooks the true complexity of supported self management as a concept. What’s more, it fails to capture valuable learning about the key ingredients that make it work in practice, along with insights into its replicability and potential for scaling up in different settings.
Policy makers repeatedly call for us to deliver on the supported self management agenda. The difficulty in doing this is that the current state of the evidence base leaves us with many unanswered questions. It is likely that in different contexts, supported self management will look different and need to be implemented differently from the ‘trial ideal’.
From an implementation perspective, we don’t have a clear sense of the mechanisms through which supported self management interventions work – which specific components and combinations are likely to work best, and in what environments and settings and with whom they are likely to be more successful and replicable.
From a practice point of view, having the evidence to underpin some of these questions could help us to ensure that the supported self management we offer is tailored to what works best for particular groups of people or in particular settings or environments. Increasingly, we need a new approach that helps us to understand more about the conditions and contexts that make supported self management work (or not work).
There is no ‘one size fits all’ methodology that will gather all of the evidence needed to underpin supported self management. Different methodologies are needed to address different yet complementary parts of the puzzle. Although the RCT model has its place, what is increasingly being called for by researchers in the field of implementation is ‘complexity-informed approaches’. Such approaches, which encourage different types of evidence in the form of in-depth, mixed method case studies, embrace the idea of understanding and theorising how supported self management works in the context of the complex, chaotic and dynamic environments that comprise our health and social care system and which can be hard to predict.
Understanding more about the ‘conditions’ that help supported self management to work well, what stops it working so well, and how these influence the mechanisms of action that lead to specific outcomes is essential for sharpening and shaping how we tailor and provide supported self management in our practice.
Approaches such as Realist Evaluation, which focusses on ‘what is it that makes a programme or intervention work?’, and Contribution Analysis, which aims to understand the links between the multiple factors contributing to success and failure, can help to do this and complement existing evidence. This kind of evidence is important learning and will help to generate meaningful and flexible solutions to the challenges of implementing effective supported self management in our time- and resource-stretched health and social care system.
It’s time for the direction of the evidence base to shift; how and when will funders get on board?
Lisa Kidd is a Reader in Supported Self Management at the University of Glasgow.