The Online Safety Act and children’s code are a response to the huge changes brought forth by the rapid adoption of unaccountable digital products fuelled by advertising © Gary Perkin/Dreamstime

The writer is a crossbench peer and expert on digital regulation

This week Ofcom published its draft Children’s Safety Code of Practice, a central plank of the recently passed Online Safety Act. For many, it was a long-awaited victory in the fight for child online safety. But for some it was a futile attempt to rein in an arrogant and unaccountable tech sector, and for others, too little, too late.

The 19th century saw not one but more than 10 Factory Acts, each tackling aspects of health and safety for workers in the new industries, as well as legislation that brought in street lights, sewers, food safety, even the weekend. The impact of industrialisation required seismic shifts, not only to protect those with their hands on the looms, but also those who lived in the cities that came in its wake. Similarly, the Online Safety Act and children’s code are a response to the huge changes brought forth by the rapid adoption of unaccountable digital products fuelled by advertising.

The very fact that companies hosting user-generated content likely to be accessed by children will be required to know when they are dealing with a child is a victory. For two decades at least, the tech sector has been wilfully ignoring children. If you knew for sure your customer was 13, it raised the question of why you sent them hardcore porn, or a message designed to encourage self-harm — it was easier to turn a blind eye to their age. So, while some suggest that identifying children is the end of the internet as they know it, a growing consensus not only in the UK but across the world holds that a know-your-customer approach is standard stuff.

Just as popular is preventing children from accessing pornography, which increasingly normalises the sexualisation of early childhood. Introducing age-gating is long overdue. A whopping 27 per cent of children have seen pornography by the time they are 11, largely unasked for. And recently, the Internet Watch Foundation, which identifies child sexual abuse content, uncovered a new low — over 100,000 web pages featuring self-generated child sexual abuse material involving children under 10. Pornography has also contributed to the normalisation of rape culture.

The research behind the children’s code is quite mesmerising. It found 20 per cent of children aged 8-12 had signed up to an app as an 18-year-old. Young people report multiple and regular encounters with content that promotes suicide, self-harm and eating disorders. Children exposed to graphic violence don’t bother to report it because they don’t believe services will do anything about it and don’t trust them to keep the report confidential.  

The code is weak on design features, however. While the research shows livestreaming and direct messaging are high risk, there are few mandatory mitigations included to tackle them. Similarly, the requirement for measures to have an existing evidence base fails to incentivise new approaches to safety, including those revealed by whistleblowers. Frances Haugen and Arturo Béjar both walked out of their jobs at Meta with internal research about safer ways of designing its services that Mark Zuckerberg had rejected. How can you provide evidence that something does not work if you don’t try it?

The children’s code could be a great deal more ambitious, and must tackle all the risks it identifies. Its emphasis is on asking kids to choose protective tools rather than safety by design and default. This may well serve tech companies since it is estimated that over 90 per cent of people never change the default settings. And then there is the glacial speed at which the code is travelling and the 1,300 pages that make it impenetrable to any parent, teacher or child. The measures are in practice only understood by the regulator and regulated — this is unhealthy.

Many of these problems do not lie at Ofcom’s door. The government tied the legislation up in knots, granting exceptions and restricting powers. One argument is that starting narrow and then adding to the code is the way to protect against unintended consequences. There are two problems with that.  

An increasing number of parents are calling for a straightforward smartphone ban. They are — legitimately — furious and frustrated that they (and their children) are no match for the power of the billions poured into persuasive design. Parents want something to happen now, before their children miss their entire childhood waiting for just one more text, post, emoji, comment or like. And then there is the looming spectre of artificial intelligence, which will supercharge every harm that children already experience.  

As we celebrate the arrival of the draft code, we should already be demanding that the holes in it are fixed, the exceptions readdressed, the lobbyists contained. We must move towards faster legislation based on existing principles of how we treat children and, of course, we should have an AI act. This is a new industrial revolution; we know from the last one what is needed.

