Government to strengthen Online Safety Act

19/02/2026 | UK Government

On Monday, 16 February, Prime Minister Keir Starmer announced that the government will act immediately to introduce measures to enhance online safety for children, marking a significant shift in the UK's approach to digital regulation. Speaking to parents and young people, the Prime Minister highlighted the need for the government to act at pace against addictive designs and rapidly evolving technologies.

In its corresponding press release, the government confirmed that it will introduce key amendments to the Children's Wellbeing and Schools Bill (CWSB) to strengthen protections around the collection of children's personal data, along with updates to the Crime and Policing Bill (CPB) to prohibit artificial intelligence (AI) tools from creating harmful or illegal content, including the generation of non-consensual sexual images.

The new measures build on existing Online Safety Act 2023 (OSA) provisions that require platforms to prevent children from accessing inappropriate and harmful content by implementing age-assurance and safety measures. 

The new legal powers will allow the government to implement the findings of its consultation on children's wellbeing within months through secondary legislation, bypassing the need for primary legislation as technology evolves. 

The government will also consult on methods to prevent the transmission of nude images of children and investigate restrictions on children's use of VPNs and AI chatbots. 

In an update on Thursday, the Department for Science, Innovation and Technology (DSIT) announced measures to prevent the spread of non-consensual intimate images and ensure their permanent removal from digital platforms. The measures include categorising such imagery as a priority offence under the OSA, which, in terms of seriousness, is on par with child abuse or terrorism.

Technology companies will be required to remove qualifying imagery within 48 hours of notification. This new requirement will be implemented through an amendment to the CPB. Platforms that do not comply face potential fines of up to 10% of their global turnover or have their services blocked in the UK.

The government also intends to implement a system that allows victims to report an image once to trigger its removal across multiple platforms. Under the proposal, flagged images would be automatically deleted if re-uploaded. Ofcom is considering a digital marking system for such content, similar to methods used for terrorism and child sexual abuse material (CSAM), to enable automatic detection and removal.
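The "report once, remove everywhere" mechanism described above rests on hash matching: a reported image is reduced to a fingerprint that platforms check on every upload. The sketch below is purely illustrative and is not Ofcom's actual system; it uses an exact SHA-256 digest for simplicity, whereas real schemes for CSAM and intimate-image abuse (such as PhotoDNA or StopNCII) use perceptual hashes that survive resizing and re-encoding. The `FlaggedImageRegistry` class and its method names are invented for this example.

```python
import hashlib


class FlaggedImageRegistry:
    """Illustrative shared registry of hashes for flagged images.

    A victim reports an image once; platforms then screen every
    upload against the shared hash list and block re-uploads.
    """

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    @staticmethod
    def _digest(image_bytes: bytes) -> str:
        # Exact-match digest for illustration only; production systems
        # use perceptual hashing so altered copies still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def flag(self, image_bytes: bytes) -> str:
        """Record a one-off report; returns the stored hash."""
        h = self._digest(image_bytes)
        self._hashes.add(h)
        return h

    def should_block(self, upload_bytes: bytes) -> bool:
        """Check an incoming upload against the shared hash list."""
        return self._digest(upload_bytes) in self._hashes
```

Because only hashes are shared, platforms can cooperate on detection without circulating the underlying imagery itself, which is the design choice such schemes depend on.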

The government will issue new guidance to internet service providers on blocking access to websites that host prohibited material, a measure aimed at platforms that operate outside the scope of the OSA. These combined actions aim to prevent the spread of intimate images and ensure their permanent removal from the digital environment.

In a statement responding to Monday's announcement, James Baker, Big Tech Programme Manager at Open Rights Group (ORG), warned: "The Government is playing whack-a-mole with online safety, focusing on individual harms and product features instead of confronting the structural power of dominant tech companies. By abandoning comprehensive AI regulation under pressure from Big Tech, it is allowing private actors to shape the rules of digital life without democratic oversight."

ORG instead calls for a safety-by-design approach rather than ever-expanding restrictions on specific features or services. As a product-engineering concept, safety by design requires anticipating and mitigating risks through technical and process controls.

In a separate statement, Maya Thomas, Legal & Policy Officer at Big Brother Watch (BBW), said: "The Prime Minister's announcement that the government intends to restrict access to VPNs for under-16s represents a draconian crackdown on the civil liberties of children and adults alike.

"The only way such restrictions could be enforced effectively would be for VPN providers to require all users to undergo age-assurance measures. Having to provide ID or a biometric face scan to access a VPN utterly defeats the point of a technology designed to enhance privacy online."

Meanwhile, an article in the Financial Times (£) questions the government's focus on age-based restrictions under the OSA, highlighting that such measures ignore the broader harms of the digital landscape. The article argues that age restrictions may simply delay vulnerability, leaving 16 to 18-year-olds and older first-time internet users susceptible to fraud and addictive algorithms.

£ - This article requires a subscription. 


Training Announcement: The BCS Foundation Certificate in AI examines the challenges and risks associated with AI projects, such as those related to privacy, transparency and potential biases in algorithms that could lead to unintended consequences. Explore the role of data, effective risk management strategies, compliance requirements, and ongoing governance of the AI lifecycle, and become a certified AI Governance professional. Find out more.

Read Full Story

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.