What recent evaluations tell us about violence prevention, evidence and learning from null results
At the Youth Endowment Fund (YEF), we spend a lot of time asking a simple but demanding question: what actually helps children stay safe from violence?
To answer it, we fund and commission independent evaluations of violence prevention interventions across England and Wales, many of which can be found in our Toolkit. Not all of them immediately show the positive impacts we might hope for, but however messy or uncomfortable the findings, they are always useful.
To kick off 2026, we’ve published a new batch of evaluations measuring the impact of violence prevention interventions. They look at mentoring, therapeutic support delivered by youth practitioners, and a classroom-based empathy programme. They are rigorous, independently peer reviewed, and, crucially, not all of them show the positive impacts we might have hoped for.
That’s not a failure. It’s the point.
Post-it notes, microwaves and bubble wrap all came from ideas that didn’t work as planned at first. James Dyson famously tested 5,127 prototypes before arriving at a vacuum that worked – and within two years of launch, his design was the UK’s bestselling vacuum cleaner. Learning from what doesn’t work is often how breakthroughs happen.
Null results, or very small effects, aren’t empty results
A ‘null’ result doesn’t mean ‘nothing happened’. It means the programme didn’t clearly outperform what would have happened anyway.
That distinction matters. Violence prevention is complex, and good intentions alone are not enough.
Across these studies, we see children engaging with programmes, forming relationships with adults, and reporting positive experiences. We also see small changes in some outcomes, but with a lot of uncertainty attached. In other words: signals, not slam dunks.
That tells us something important. It tells us that good intentions, and even good experiences, don’t automatically translate into measurable reductions in harm. And if we’re serious about preventing violence, we need to know where that gap comes from.
For practitioners and commissioners, this distinction is critical. Decisions about funding, scaling and redesign depend on understanding not just what feels promising, but what measurably reduces harm.
The role of implementation in evaluation outcomes
One of the clearest lessons from the most recent batch of evaluations is how hard it is to deliver programmes as designed.
Children often received far fewer sessions than planned, as staff were stretched and schools struggled to make time. Families, too, often didn’t engage in the way the model assumed. None of this is surprising to practitioners, but it’s rarely captured clearly without rigorous evaluation.
These findings remind us that dosage (how much of the programme children actually receive), feasibility (whether it can realistically be delivered in busy, real-world settings), and fit (how well it aligns with children’s needs and the systems around them) are not ‘nice-to-have’ details. They are central to whether an intervention has any chance of working. An approach that looks powerful on paper can become something very different once it meets real-world constraints.
Why relationships alone don’t reduce violence
Another theme running through these studies is the importance of relationships. Children consistently reported valuing trusted adults, mentors, and practitioners. That’s not trivial. Feeling supported and understood matters.
But the evidence also cautions us against assuming that relationships alone are enough to shift complex outcomes like offending or serious behaviour problems. Relationships may be necessary, but they are rarely sufficient on their own. Understanding what needs to sit alongside them, and for whom, is a key next step for the field.
Why publishing null results strengthens the evidence base
It’s tempting to quietly move on when results are messy. But that would be a mistake.
The What Works Network exists precisely to learn from uncertainty, to pool evidence, test assumptions, and share findings openly, even when they’re uncomfortable. YEF is in a rare position to do this at scale, across multiple interventions, populations, and settings.
By publishing null results, small effects, and mixed findings, we help the sector learn from evaluation. We sharpen future programme design. And we create space for more honest conversations about what it really takes to make a difference.
Evidence-based violence prevention is about progress, not perfection
If every evaluation showed large, clean impacts, we should be worried rather than relieved. Social problems are complex. These programmes operate in crowded systems and aim to change children’s lives. If something isn’t working, or we can’t be confident that it is, we need to know that.
Evidence isn’t about chasing perfect results. It’s about reducing guesswork, and sometimes, the most valuable finding is the one that makes us stop, rethink, and try again, better informed than before.
That commitment to transparency and learning sits at the heart of YEF’s approach to evidence and funding. That’s not failure; that’s progress.