Summarised by Centrist
In May, the coalition government halted efforts to update the country’s online safety regulations, despite support from tech giants.
The Safer Online Services and Media Platforms project purportedly aimed to regulate harmful online content such as child exploitation and the promotion of self-harm.
Researcher Fiona Sing and Professor Antonia Lyons, both from the University of Auckland, call it “a missed opportunity”, but was it more of a dodged bullet?
Sing and Lyons argue that submissions from tech giants like Facebook and X (formerly Twitter) expressed support for the proposed regulations.
While the companies preferred broad principles over strict rules, their willingness to cooperate represented an opportunity to protect future generations from online threats, according to the researchers.
However, in scrapping the work, the government cited the difficulty of policing illegal content and the subjective nature of “harm” and “emotional wellbeing”.
Editor’s note: While tech giants’ submissions may have been supportive of an “independent regulator”, public submissions overwhelmingly opposed the proposals.
Cited concerns included online censorship, the importance of free expression, and the risk that the regulations could stifle public discourse and silence dissenting views.
Submitters echoed Internal Affairs Minister Brooke van Velden’s view that terms like ‘harm’ and ‘misinformation’ are subjective and open to misuse, and that a distinction must be drawn between ‘harmful’ and illegal content.