Whether we’re buying a new computer, washing machine or holiday, most of us have probably used websites that allow us to compare prices or read expert and consumer views before we click ‘purchase’. So websites that give the lowdown on interventions to improve child and youth well-being won’t be a completely unfamiliar concept.
Recent years have seen a proliferation of so-called ‘what works registries’ to support decision-makers in selecting interventions to prevent or address issues such as maltreatment, bullying, poor mental health, crime and violence. A new article takes stock of their strengths and limitations. A team of us analysed 24 registries and relevant literature. We also drew on our experience of developing, using and working with registries.
What do registries do well?
At their best, registries help policy-makers and commissioners to make more informed decisions about what to implement. They crunch reams of data into accessible formats. This allows registry users to compare the effectiveness of different interventions.
The standards of evidence that underpin registries also pay close attention to evaluation quality. This helps to guard against over-claiming for the effectiveness of an intervention. In turn, standards can help improve the quality and reporting of future evaluations.
Previously, it was near impossible for time-pressed policy-makers and commissioners to discriminate between tested and effective interventions and those with no evidence of impact or even evidence of harm. Some registries go further, appraising how ready an intervention is for wider use, including the availability and cost of materials and training.
While some registries only publish ‘proven’ interventions, the majority use stepped standards. Content can range from fledgling interventions with preliminary evidence of impact to those shown to work in multiple rigorous trials. This can help policy-makers and commissioners assess the strength of what is already being delivered. It also helps developers and evaluators to navigate the next steps with intervention development.
These important benefits should not be underestimated. However, registries also have limitations.
What are the limitations of registries?
To start with, many focus primarily on programmes. Yet these make up only a fraction of practice with children and families. Even if there is scope to implement most evidence-based programmes more widely, practice is not full of programme-sized holes.
Registries may stifle innovation because commissioners only want to fund programmes ‘on the list’ or with the highest rating. Perfectly good interventions that incorporate the core features – if not the branding – of effective programmes may be defunded.
Stepped standards also imply that intervention development is linear, culminating in proof of effectiveness in a randomised trial. Yet moving to a higher level does not necessarily signal improvement to the intervention, and nor is improvement contingent on jumping levels.
Next, registries pay less attention than expected to the generalisability of effects. They focus more on ‘how confident can we be that this intervention worked there?’ than on ‘how likely is it to work elsewhere?’.
They also tend to be weak on implementation. Registry users end up very enlightened about evaluation quality and effectiveness but largely in the dark about how easy it was to implement a given intervention and whether practitioners and families liked it.
Then there is the struggle to stay up to date. Ratings quickly become outdated as interventions or standards of evidence change. What was assessed may not resemble what is now available, while older ratings may be generous relative to those based on upgraded standards.
Finally, there are simply too many registries. This is confusing for consumers, especially as ratings against different sets of standards often appear to conflict.
Where next for registries?
There is a lot that registries get right. But there is clearly room for improvement.
Some of this concerns registry content. More information about study samples and the contexts in which an intervention was evaluated would help with assessing how ‘transportable’ an intervention might be. It would also be valuable to hear from people who have provided or used the interventions.
Then there is the issue of registry use. Just because an intervention is ‘Effective’ doesn’t mean that it should be adopted, just as something ‘Unproven’ shouldn’t automatically be avoided or culled. Issues such as context and cost need to be considered. Some decision-makers and service commissioners are likely to need training and support with this.
Next, consolidation would help reduce confusion. We need a moratorium on creating new but largely derivative standards, and a commitment to plug acknowledged gaps. Coordination between registries could make reviewing more efficient and help keep registries up to date.
More radically, there is a case for expanding the range of intervention types and evaluation methods in registries. There are other means of improving outcomes besides scaling tested and effective programmes, and other methods of evaluating effectiveness besides trials.
These suggested changes are neither exhaustive nor easy. There is a tension between oversimplification and information overload. Moreover, registries are part of a broader evidence ecosystem. This means that registry curators aren’t solely responsible for making changes. Funders, government and intermediary organisations all have roles to play.
This blog ends with a call for more research. Ironically, evidence on the impact of ‘what works’ registries on decision-making and investment in interventions for children and families is very limited. Understanding more about who uses them and with what effect is essential.