The Effectiveness of the UK’s Online Safety Act Comes Under Scrutiny
The United Kingdom's Online Safety Act, whose child-safety duties took effect in July 2025, has come under extensive scrutiny following a recent survey by Internet Matters. The legislation was designed to strengthen child protection across digital platforms, but early indications suggest it has not delivered the anticipated improvements. Instead, responsibility for managing children's online safety continues to rest heavily on families, while the enforcement mechanisms appear both intrusive and largely ineffective.
The survey highlighted a mix of outcomes from the initial implementation of the legislation. Approximately half of the children surveyed reported encountering more age-appropriate content online. Moreover, around 40% of both parents and children perceived the online environment to be somewhat safer since the Act’s introduction. Notably, over 90% of children who noticed enhancements in blocking and reporting features viewed these modifications favorably. They appreciated the clearer guidelines, restricted contact with strangers, and limitations on high-risk functionalities that were part of the changes.
Nonetheless, significant flaws have emerged in the age verification systems the Act mandates. Nearly half of the participating children said the age checks were simple to evade, and roughly one-third admitted to circumventing one recently. Tactics included entering false birthdates, sharing login credentials, spoofing facial recognition, and using VPNs. In one revealing instance, a parent recounted how their 12-year-old son used an eyebrow pencil to draw a mustache in an attempt to pass a facial verification check. Over half of the surveyed children reported being asked to verify their age on prominent platforms such as TikTok, YouTube, and Roblox within a two-month period, using methods ranging from facial age estimation to government-issued IDs and third-party assurance applications.
Despite these measures, substantial protection gaps remain. In the month after the child protection codes took effect, almost half of the children surveyed reported encountering harmful online content, including violent, hateful, and body image-related material, all categories the Act is meant to cover. Parents expressed growing concern about the privacy of data collected for age verification, questioning whether it would be retained or repurposed by government or corporate entities. Parents also pointed to a risk the age-check regime leaves unaddressed: adults impersonating minors to access child-specific online spaces, a danger they associate with predatory behavior.
Security professionals are particularly concerned about current age assurance systems, noting that they strike a troubling balance between safety and privacy. The report's authors concluded that while the Act has made safety features more visible, it has not fundamentally transformed online child protection: harmful content remains prevalent, age verification is inconsistent and easily bypassed, and emerging issues such as excessive screen time, artificial intelligence (AI) risks, and manipulative design practices remain under-regulated.
Experts suggest that organizations implementing age verification should prioritize privacy-focused, centralized solutions that avoid fragmented data collection across platforms; in practice, this could mean users proving their age once to a trusted assurance provider and reusing that proof, rather than submitting identity documents to every service separately. Such a framework would be better placed to uphold children's safety while safeguarding their privacy.
The challenges illustrated by the survey underscore a pressing need for regulatory bodies, policymakers, and industry stakeholders to assess and recalibrate the mechanisms put in place under the Online Safety Act. As the digital landscape evolves, continuous improvements and reassessments will be crucial to ensure that the protection of children online aligns with both safety and privacy considerations.
In summary, while the Online Safety Act has made visible progress, a significant gap remains between its intended goals and its real-world effectiveness. Keeping children safe online will require ongoing attention, collaboration, and a sustained commitment to robust, privacy-conscious measures; the evolution of technology should not come at the expense of children's right to safe online experiences.

