The Toolkit provides three ratings:
1. An impact rating
This describes whether, on average, the approach has had a low, medium, high or harmful impact.
2. An evidence rating
This describes the confidence that we have in the research used to calculate the impact rating.
3. A cost rating
This gives a general indication of the cost of the approach, relative to other approaches in the Toolkit.
Each of the ratings is described in more detail below. A full, technical description can be found in the Toolkit Technical Guide.
Impact
The impact rating indicates the likely average impact of the approach on keeping children safe from involvement in violence. For example, the section on CBT shows that, on average, CBT is likely to have led to relatively large reductions in violence.
The rating is based on the average effect size reported in a meta-analysis. A meta-analysis is a type of study which aims to find as many studies as possible on a particular approach, and then calculate the average impact.
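The averaging step can be sketched in code. This is a simplified illustration of a fixed-effect meta-analysis, not the Toolkit's actual method; the study effect sizes and variances below are hypothetical.

```python
# Simplified sketch of how a meta-analysis pools effect sizes.
# The numbers are hypothetical, for illustration only.

# Each study reports an effect size (e.g. a standardised mean
# difference, where negative means less violence) and its variance.
studies = [
    {"effect": -0.30, "variance": 0.02},
    {"effect": -0.10, "variance": 0.05},
    {"effect": -0.25, "variance": 0.01},
]

# A fixed-effect meta-analysis weights each study by the inverse of
# its variance, so more precise studies count for more.
weights = [1 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)

print(round(pooled, 3))  # the pooled (average) effect size
```

The pooled value is the kind of average that underpins the impact rating; in practice reviews use more sophisticated models (for example, random-effects models) to account for variation between studies.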
It’s important to remember that this rating refers to the average impact. The average is useful – it helps us to distinguish the approaches that are most likely to work. But there is still variation around this average. The detailed summaries describe what we know about the causes of this variation, because it might help you design more effective activities. For example, the detailed summary on CBT explains how programmes with more regular sessions have tended to have larger impacts.
The Toolkit aims to estimate the average impact on violent crime, but studies rarely measure the impact on violence directly. Research has tended to focus on more general measures of crime, or on outcomes which predict violence. Where we do not have a direct measure of the impact on violent crime, we use information about the impact on related outcomes (for example, reduced aggression or improved family relationships) to estimate the impact on violence.
More detail on the impact rating is available in the Toolkit Technical Guide.
Evidence security
The evidence rating describes the confidence that we have in our impact rating. The possible ratings range from one magnifying glass (very low confidence) to five magnifying glasses (very high confidence).
The evidence rating is based on four criteria.
- The quality of the systematic review that the impact rating is based on.
- The number of studies in that systematic review.
- The consistency of effect sizes across the primary studies included in the meta-analysis.
- Whether the impact estimate is based on a direct measure of crime or violence, or an indirect estimate based on an intermediate outcome such as bullying perpetration.
What does each rating level mean?
- Very low confidence: We did not find a systematic review, or the available systematic reviews did not include suitable studies.
- Low confidence: A systematic review exists but has limitations or did not directly measure the impact on crime or violence.
- Moderate confidence: The research is relatively well-established, but there are some limitations in the methods used, the number of studies, or the consistency of findings.
- High confidence: The evidence for these topics is relatively strong compared to others in the Toolkit. However, there are some minor limitations which prevent the topic receiving the highest rating.
- Very high confidence: To receive this rating, topics need to be based on a high-quality systematic review that includes a large number of studies with very consistent findings.
For more detail on how the evidence security rating is allocated see the Toolkit Technical Guide.
Cost
The Toolkit cost rating aims to provide an initial indication of the likely cost of implementing the approach. The rating places the estimated average cost per participant in one of three bands:
- £0 – £500
- £500 – £1,500
- £1,500+