Last Thursday evening, Channel 4 broadcast Molly vs The Machines, a documentary about 14-year-old Molly Russell from Harrow, who died by suicide in November 2017 after months of exposure to content about depression, self-harm and suicide on Instagram and Pinterest. The same day, Amnesty International issued a press release calling for ‘fundamental redesign’ of social media platforms and robust government legislation to protect children online.
Nobody disputes that Molly’s story is devastating. Nobody disputes that the platforms behaved appallingly. The Prevention of Future Deaths report issued by Senior Coroner Andrew Walker concluded that Molly ‘died from an act of self-harm whilst suffering from depression and the negative effects of on-line content’, widely reported as the first time a UK inquest had directly implicated social media content as a contributory factor in a child’s death. Meta’s representative described some of the harmful content Molly had viewed as ‘safe’. Pinterest at least had the decency to apologise.
But there is a conversation this week that nobody is having. And it is the most important one.
Read Amnesty International’s statement carefully. It calls for government legislation, platform redesign, algorithmic accountability and robust enforcement. These may or may not be achievable. What is striking is what the prescription does not contain. Amnesty mentions Molly’s family in passing, but parents and family stability are not treated as a policy lever anywhere in the statement. The focus is overwhelmingly on platforms, algorithms and governments.
The government’s new consultation ‘Growing up in the Online World,’ launched on March 2, does somewhat better. It includes a chapter on supporting parents and carers, and acknowledges that parents are ‘grappling’ with screen-time questions. But the centre of gravity remains regulation and platform duties: bans, curfews, age verification, VPN restrictions and legislative amendments. The assumption running through the bulk of its proposals is that the state and the platforms between them can fix this, if only we get the levers right.
The research says otherwise.
What the evidence actually shows
NHS Digital’s survey of the mental health of young people in England found that 6.2 per cent of children aged five to ten living with married parents had a diagnosable mental disorder. Among children of lone parents, the figure was 17 per cent. These are unadjusted figures for England, but the direction of the association holds across multiple studies and methodologies. No algorithm explains most of that gap.
A meta-analysis drawing on 54 studies and more than 500,000 participants found that parental divorce was associated with a 48 per cent increase in the odds of suicidal ideation in offspring. A 2026 Korean study of more than 60,000 young people confirmed that adolescents who experienced a change in family structure had significantly poorer mental health on every measured outcome compared with those living with two biological parents.
The Centre for Social Justice’s February 2026 report ‘I Do?’ found that cohabiting couples are three times more likely to separate than married couples, that by a child’s fifth birthday 28 per cent of cohabiting couples had split compared with 10 per cent of married couples, and that children experiencing family breakdown before age 19 are over twice as likely to experience homelessness, twice as likely to be in trouble with the police, and almost twice as likely to experience alcoholism.
These are not peripheral findings. They are among the most robust signals in the data.
The bridge nobody is building
Here is the connection the online safety debate is missing entirely. Research drawing on attachment theory argues that adolescents experiencing parental emotional neglect are more likely to develop problematic social media use, which in turn worsens their psychological distress. Teenagers who lack emotional warmth and validation at home turn to social media to fill the gap. The platforms are designed, deliberately and profitably, to exploit exactly that vulnerability.
For many children, the algorithm is an accelerant, not the first spark. The wound was opened elsewhere.
This does not mean platform accountability is irrelevant. The inquest evidence about Instagram and Pinterest is damning, and the Online Safety Act 2023 represents a necessary attempt to impose some discipline on an industry that has shown none. But regulation alone cannot close the wound. Only a present, warm, engaged parent can begin to do that.
The research on parental mediation is clear. The EU Kids Online survey of more than 25,000 children across 25 countries found that active parental mediation — talking with children about online content, sharing their online experiences and providing guidance — reduces harm without reducing children’s digital opportunities or skills. A 2025 systematic review concluded that parental mediation is ‘one of the most effective methods to address the risks faced by young people on the Internet’. A Gallup study found that children whose parents imposed the most stringent screen restrictions spent almost two hours less per day on social media than children whose parents set no limits at all.
Neither the government consultation nor the Amnesty statement treats parental mediation and family stability as central pillars of the solution.
The deeper foundation
There is one further protective factor the research identifies which is even further from the policy conversation: faith community.
Evidence from longitudinal studies suggests that participation in religious and spiritual practices can be protective against depressive symptoms in young people, though findings vary by context and study quality. A separate systematic review found that religious service attendance is associated with protection against suicide attempts, with the mechanisms including access to a supportive community, a source of hope, and ways of interpreting suffering that do not lead toward self-destruction. It is not doctrine alone that does the protective work. It is the concrete social network, the belonging, and the accountability that regular communal participation provides.
A child who is known, loved, and held accountable within a faith community is a child who is harder for an algorithm to reach.
What we should be saying
Before going further, one thing must be said plainly. Ian Russell, Molly’s father, is a courageous man who has given the online safety debate much of its moral force. His campaign has achieved real things. Nothing in this article is a verdict on his family or on Molly’s. The argument here is a population-level one: across millions of children and families, the research consistently identifies family stability, parental presence and community belonging as among the most powerful protective factors we have. No policy framework that ignores these can succeed.
The deepest safeguard against the machines is not a redesigned algorithm or a government consultation. At population level, it is children growing up in stable, committed families, with present and engaged parents who know what their child is watching, who talk to them about it, and who provide the emotional warmth that makes a young person less, not more, vulnerable to whatever the feed sends next. Marriage, the most stable foundation for family life the evidence consistently identifies, is not a peripheral variable in this debate. It is a central one.
Britain is conducting a national conversation about children’s online safety. It is time that conversation included the most important variable of all.
The government’s consultation Growing up in the Online World is open until May 26. We would encourage every reader to respond.