About the YEF Toolkit
Welcome to the Youth Endowment Fund’s Toolkit, a free, evidence-led resource bringing together research on approaches to violence prevention.
Understanding what works to prevent violence
The YEF Toolkit summarises the best available research on approaches to prevent children and young people’s involvement in violence. It draws on real-life data to show what has happened when different approaches have been used in practice.
For each approach, it explains:
- The average impact of the approach
- The quality of evidence behind it
- How much it’s likely to cost
- How to implement the approach and make it as effective as possible
- Where you can find existing high-quality programmes
- Take-away messages about what to do with this evidence
- Where to find other evidence-informed resources
Who is the YEF Toolkit for?
The YEF Toolkit has been developed to support professionals working to prevent violence, including children’s services, policing and youth justice, youth charities, and school leaders. It is designed to help with:
- Planning or commissioning a new service
- Improving an existing programme
- Commissioning research
- Applying for funding
The Toolkit presents research in a way that’s easy to access and easy to understand. It’s there to complement your own expertise and local knowledge, rather than replace it. While it doesn’t offer fixed answers, it does highlight evidence-informed ‘best bets’ and approaches that have been shown to work well in similar contexts.
Why evidence matters in preventing violence
High-quality evidence about how best to support children at risk of violence is hard to find, access, and understand. This can make it difficult for organisations to judge which approaches are most likely to keep children and young people safe.
The YEF Toolkit aims to bridge that gap by making high-quality evidence easier to find, understand, and use. We hope it will support better decisions about how to help children stay safe from violence and crime.
What is the YEF Toolkit based on?
Our Toolkit draws on the best available global research in our Evidence and Gap Map – a database of over 2,200 studies examining the effectiveness of different interventions to prevent violence.
The evidence base is updated regularly as new research becomes available, including findings from YEF-funded evaluations and UK-based studies.
How do we calculate our ratings?
Impact rating
The impact rating indicates the likely average impact of the approach on keeping children safe from involvement in violence.
The rating is based on the average effect size reported in a meta-analysis. A meta-analysis is a study that aims to find as many studies as possible on a particular approach and then calculates their average impact.
It’s important to remember that this rating refers to the average impact. The average is useful – it helps us to distinguish the approaches that are most likely to work. But there is still variation around this average, and the summaries and technical reports provide more information about what might cause this variation, such as the length of the programme, or the settings in which it is delivered.
Evidence quality rating
The evidence quality rating describes the confidence that we have in our impact rating. The possible evidence ratings range from one magnifying glass (very low confidence) to five magnifying glasses (very high confidence).
We produce this rating in one of two ways:
- Where we have used an existing systematic review of the evidence, the evidence quality rating is based on:
  - The quality of the systematic review that the impact rating is based on
  - The number of studies in that systematic review
  - The amount of variation in the results from the primary studies used to produce the impact rating
  - Whether the impact estimate is based on a direct measure of violence or crime, or an indirect estimate based on an intermediate outcome such as bullying perpetration
- Where we have commissioned our own systematic reviews, we use slightly different criteria, because we already know that the quality of the review is high and that it directly measures violence and/or crime. For these reviews, the evidence quality rating is based on:
  - The number of studies
  - The quality of the studies included in producing the impact rating
  - The amount of variation in the results; where there is substantial variation that we cannot explain, we lower the rating
For more detail on how the evidence quality rating is allocated, see the Toolkit Technical Guide.
Cost rating
The cost rating provides an indicative average cost of the approach per child, per programme. This is usually calculated as the average cost across three programmes delivered recently in the UK.
£ = Low cost: £500 or less, per child per programme.
££ = Moderate cost: £501 – £1,499, per child per programme.
£££ = High cost: £1,500 or more, per child per programme.
Toolkit FAQs
Does the Toolkit include international research?
The Toolkit includes studies conducted both in the UK and internationally. However, almost all of the studies behind the Toolkit were written in English.
What types of studies does the Toolkit include?
The Toolkit includes a variety of different types of studies:
- Systematic reviews – which summarise the findings across a number of different studies conducted on the same approach.
- Impact evaluations – which measure the impact of an intervention on crime, violence or other outcomes. These studies aim to provide a rigorous estimate of the extent to which the intervention has met its aims.
- Implementation and process evaluations – which use a mix of quantitative and qualitative research methods to explore implementation, understand the experiences of those involved, and examine the reasons why the intervention did or didn’t work. These studies often provide rich information to support reflection on how to maximise the impact of an approach.
How were young people involved in developing the Toolkit?
YEF worked with Clearview, a research agency, to recruit a diverse group of young people from across the UK to join our Toolkit Youth Panel. The panel was consulted at various stages of the development of the YEF Toolkit, including reviewing descriptions of practice against their experience, sharing views on approaches to violence prevention, and participating in user testing.
Since the initial design, the Toolkit team has engaged with our Youth Advisory Board on Toolkit content, discussing the evidence and developing the YEF narrative in the summaries and take-away messages about various approaches.
Who developed the Toolkit?
The first iteration of the Toolkit (2021 to mid-2025) was informed by academics from the University of Cambridge’s criminology department, who teamed up with the Campbell Collaboration, an organisation internationally renowned for conducting systematic reviews.
The Campbell and Cambridge team conducted reviews of the research evidence included in the Toolkit, producing 17 systematic reviews and 34 Toolkit technical reports. The review team included Professor David Farrington and Dr Hannah Gaffney from the University of Cambridge, Howard White from the Campbell Collaboration, and Professor Darrick Joliffe from the University of Greenwich.
In 2024, we partnered with the National Children’s Bureau, the EPPI Centre, and the Race Equality Foundation to review our methodology, update our Evidence and Gap Map, and produce around 12 updates to the Toolkit each year.
Find out more
You can read more about our approach to developing the Toolkit and the evidence that sits behind it in our Technical Guide. We last updated the Technical Guide in January 2026.
If you have any questions or suggestions about the Toolkit, please email our Head of Toolkit Laura Knight.