A recent report from Internet Matters finds that the UK's Online Safety Act (OSA) falls short of delivering the protection from online harm that many had anticipated. The report, titled “The Online Safety Act: Are Children Safer Online?”, paints a mixed picture: genuine advances in online safety alongside persistent vulnerabilities that threaten children navigating the digital landscape.
The OSA has undoubtedly made online safety tools more visible. Numerous social networks, online games, and other digital platforms have adopted measures such as age verification checks, reporting tools, and parental controls, and roughly 68% of parents and children say they have noticed these changes. Families have largely welcomed the measures, children especially: 90% view features such as blocking and reporting options positively.
A central aim of the OSA is to curtail harmful interactions online. Features that limit contact with strangers and restrict access to functions such as livestreaming have been well received. Yet the report cites statistics that point to ongoing problems: nearly 49% of children reported encountering harmful content online, including violent, racist, homophobic, and explicit material. One particularly distressing example from the focus groups involved children whose feeds algorithmically recommended footage of the assassination of political commentator Charlie Kirk. Such exposure raises serious questions about the effectiveness of current safeguards.
The report's examination of age verification exposes significant weaknesses. While 53% of children said they had been prompted to verify their age online, nearly half (46%) believed the checks were easy to bypass. Children readily circumvent them by entering false dates of birth, using a parent's identification, applying facial filters, or connecting through virtual private networks; one child reportedly drew on a moustache with an eyebrow pencil to appear older. Alarmingly, almost a third of children admitted attempting to cheat age verification on their own, and one in four parents confessed to helping their children do so or knowingly allowing it.
The findings suggest that parents increasingly feel overwhelmed by sole responsibility for their children's online safety, a burden they believe should be shared by digital platforms and regulators. The report argues that without markedly improved age verification and stricter enforcement, children will continue to access inappropriate content and features with alarming ease.
Moreover, the survey uncovers a critical gap in the legislation: excessive screen time driven by addictive platform design. Families described struggling against algorithm-driven feeds, infinite scrolling, and autoplay features that encourage prolonged use. According to the report, 59% of children admitted staying up late on their devices, while 45% reported skipping exercise or outdoor play in favor of online activity.
A notable emerging challenge is AI-generated content. Children expressed concern about the growing prevalence of hyper-realistic images and videos on social media, and in particular the fear that generative AI could be exploited to create explicit deepfakes involving minors. Although 39% of parents and 42% of children believe the online environment has become safer recently, the report concludes that the OSA has not yet delivered the substantial changes needed to meaningfully improve children's digital well-being.
Ultimately, Internet Matters calls for more robust enforcement, the adoption of safety-by-design principles, and stricter age assurance systems. The organization also stresses the urgent need for regulation of emerging technologies, especially those driven by AI, if online protections are to keep pace with how children actually use digital platforms. As the landscape of digital interaction continues to change, stronger measures are imperative to ensure that children can explore the online world safely.

