Critical Gaps in Age Verification Systems Under the Online Safety Act
The implementation of the Online Safety Act in July 2025 was intended to bolster protections for children navigating the digital landscape. While the Act aimed to enforce stricter age verification, limit exposure to harmful content, and improve reporting mechanisms, initial assessments reveal alarming shortcomings. Children have circumvented these safeguards with simple tactics, such as drawing on fake facial hair to pass as older on camera.
Despite some progress in online child safety, enforcement of these age verification systems has proven inconsistent and distressingly easy to bypass. Reports describe a range of evasion techniques employed by children, from entering fictitious birthdates and borrowing credentials from adults to more sophisticated methods such as spoofing facial recognition systems. One large survey found that nearly half of children believe age verification systems can be easily circumvented, with roughly one-third admitting to having done so recently.
A striking example from the report illustrates the creativity of children in exploiting these weaknesses. A parent recounted how their 12-year-old son successfully deceived an age verification system by using an eyebrow pencil to create a makeshift moustache, convincing the system that he was of an appropriate age. Such incidents underscore the vulnerabilities inherent in facial age estimation technologies, which often rely on superficial visual cues rather than robust identity validation mechanisms.
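The weakness described above can be illustrated with a minimal sketch: a gate that compares a model's single point estimate of age against a fixed threshold. The function names and numbers here are hypothetical, not drawn from any real provider's system; the point is that the decision hinges entirely on an estimate that superficial visual cues, like drawn-on facial hair, can shift.

```python
# Hypothetical sketch of a threshold-based facial age estimation gate.
# No identity is verified: access depends entirely on a model's point
# estimate, so anything that nudges the estimate past the threshold
# flips the outcome. All names and values are illustrative.

AGE_THRESHOLD = 18  # minimum age the platform requires


def passes_age_gate(estimated_age: float, threshold: int = AGE_THRESHOLD) -> bool:
    """Return True if the model's point estimate clears the threshold."""
    return estimated_age >= threshold


# A 12-year-old estimated at 14 from a clean photo is blocked...
print(passes_age_gate(14.2))  # False
# ...but a drawn-on moustache that biases the estimate upward passes.
print(passes_age_gate(19.1))  # True
```

Because the gate validates the estimate rather than the person, improving it would mean pairing the estimate with some form of identity or credential check, not simply raising the threshold.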
VPN use and account sharing were also mentioned in the report, though less frequently, suggesting that basic circumvention methods alone remain effective against the controls currently in place. Despite these vulnerabilities, the report does note some moderate improvements in children's online experience: nearly half of surveyed children said they had encountered more age-appropriate content on various platforms, and about 40% of both children and parents said the internet felt somewhat safer since the Act came into force.
Notably, children showed support for safety features like stricter regulations, reduced contact with strangers, and limitations on high-risk functionalities on various platforms. Approximately 90% of those surveyed who noticed improvements in moderation and reporting tools viewed these changes positively, reflecting a desire among younger users for safer online environments.
However, the benefits are not universal. Within just one month of the new child protection codes coming into effect, nearly half of the children surveyed reported still encountering harmful content, including the violent material, hate speech, and body image-related content that the Act aims to regulate. This gap raises significant concerns about the legislation's overall effectiveness.
The expanded requirements for age verification have also given rise to new privacy apprehensions. Over half of the children reported being asked to verify their age within a recent two-month timeframe, often on major platforms such as TikTok, YouTube, Google services, and Roblox. These age verification systems increasingly depend on technologies such as facial age estimation, government-issued ID checks, and third-party age assurance providers. While these methods are generally user-friendly, they provoke serious questions about data collection, storage, and potential misuse.
Parents have voiced their concerns regarding the handling of sensitive biometric or identity data during the verification process, worrying that such information might be retained or repurposed by companies or governmental bodies. Consequently, there are increasing calls for centralized, privacy-preserving age verification systems, rather than the current fragmented approaches scattered across various platforms.
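One possible shape for such a privacy-preserving system, sketched here with hypothetical names, is an issuer-signed attestation that carries only an over-18 bit and no identity data. An HMAC stands in for a real public-key signature purely to keep the sketch self-contained; real deployments would use asymmetric signatures and standards such as W3C Verifiable Credentials.

```python
# Hypothetical sketch: a trusted issuer (e.g. a bank or government
# service) signs a minimal "over_18" claim; platforms verify the
# signature without ever seeing the user's name, date of birth, or
# biometrics. HMAC is a stand-in for a public-key signature scheme.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # placeholder; a real issuer would hold an asymmetric key pair


def issue_token(over_18: bool) -> dict:
    """Issuer side: sign a minimal claim containing no identity data."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_token(token: dict) -> bool:
    """Platform side: check the signature; learn only the over-18 bit."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return json.loads(token["claim"])["over_18"]
```

The design choice this illustrates is data minimisation: the platform receives a yes/no answer backed by a signature, so there is no sensitive biometric or identity data for it to retain or repurpose.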
The report ultimately concludes that while the Online Safety Act has begun to reshape children’s digital environments, a substantial transformation has yet to be realized. Harmful content remains pervasive, and age assurance systems still grapple with both effectiveness and public trust. Moreover, critical issues such as excessive screen time, AI-driven risks, and manipulative platform designs remain only partially addressed, raising concerns that the needs of children are not being fully met.
Experts caution that unless these gaps are bridged, current systems may fall short of achieving a balanced solution that prioritizes child safety while respecting user privacy. The findings emphasize an urgent challenge for regulators and technology providers: to develop age verification systems that are not only secure and effective but also resilient against simple evasion tactics—like something as innocuous as a drawn-on moustache—ensuring the safety and integrity of children online.

