Consultation sets out more than 40 practical steps to keep children safer, in line with the new Online Safety Act, but says tech firms can and must do more…
The Online Safety Act, which passed into law last October, imposes strict duties on online services that can be accessed by children – everything from popular apps and search engines to social media sites – to ensure those children are kept safe. The tech firms behind these services are required by law to assess the risks posed to children and then to implement measures to mitigate those risks. But what does that mean in practice?
To answer that question, regulator Ofcom has today published draft codes of practice on children’s safety online. These set out more than 40 practical steps that online services are expected to take – and go further than current industry standards.
The codes include measures to prevent children from ever encountering the most harmful online content, such as that relating to eating disorders, pornography, self-harm and suicide. Exposure to other serious harms should be minimised, including abusive, hateful or violent material, online bullying and content that promotes participation in dangerous challenges.
Crucially, the codes underline that to meet the legal requirements to protect children online, platforms’ fundamental design and operating choices must be safer.
In publishing the codes, Ofcom reiterates the problem we all face. One study published last month reveals that 62% of children aged 13-17 reported encountering this kind of online harm in just a four-week period. Further research suggests that children first encounter violent content online while at primary school, and those who encounter content promoting self-harm or suicide say it is ‘prolific’ on social media; frequent exposure also contributes to wider normalisation and desensitisation.
Ofcom is inviting responses to these draft codes of practice by July 17. Subject to the responses received, the regulator expects to publish final versions of the codes within the next 12 months. Online services will then have three months in which to conduct the required risk assessments based on this guidance. Once approved by Parliament, the codes of practice will come into effect – and companies that fail to meet their legal duties will face enforcement and sizeable fines.
A further consultation will be launched later this year on how automated tools, such as AI, might be used to proactively detect illegal and harmful content.
Dame Melanie Dawes, Chief Executive of Ofcom, says: ‘We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.
‘In line with new online safety laws, our proposed Codes firmly place the responsibility for keeping children safer on tech firms. They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age.
‘Our measures – which go way beyond current industry standards – will deliver a step-change in online safety for children in the UK. Once they are in force we won’t hesitate to use our full range of enforcement powers to hold platforms to account. That’s a promise we make to children and parents today.’