Summarised by Centrist
Australia’s new requirement for social media platforms to block under-16s is drawing scrutiny over privacy, enforcement and scope.
Amended in 2024, the Online Safety Act requires platforms such as TikTok, Instagram and Snapchat to take “reasonable steps” to prevent users under 16 from creating accounts, or face penalties of up to A$49.5 million (NZ$54 million).
While the rule targets teens, some warn it could affect all users, as platforms may need to verify everyone’s age to comply.
The law leaves age verification methods to be defined later through “online safety rules” issued by the eSafety Commissioner.
Age assurance technologies being tested include facial scans, behavioural analysis, and biometric estimation, but critics say these methods raise serious concerns about privacy and data retention.
Preliminary results from a 2025 government-backed trial found that age assurance technology is feasible, though the review warned that no one-size-fits-all solution exists.
The review also warned of risks from over-collection of user data and gaps in protection for Indigenous users due to their underrepresentation in training datasets.
It remains unclear what “reasonable steps” will ultimately mean, which platforms will be covered, or how easily the rules can be evaded by teens using VPNs or shared accounts.
Read more over at Crikey and Telefonica