New Powers for Online Safety: An Overview of the Australian Regulatory Modernisation – Jenna Manhau Fung

Background

The Internet, with its vast array of benefits, has revolutionised the way we interact with the world, from how we learn and work to how we form connections with people. As the use of the Internet is increasingly woven into the fabric of modern society, online safety has become a more intricate and multi-faceted issue that requires our attention. In 2015, research revealed that 60% of Australians aged 18 to 54 had experienced digital harassment or abuse, with adults aged 18 to 24 and gender-diverse adults being more susceptible. The study also found that 10% of respondents reported the non-consensual distribution or posting of their nude or semi-nude images online.

Anyone of any age, across most online platforms, can experience harm or be exposed to abusive content. To keep pace with the advancement of technology and the threats it brings, the Online Safety Act 2021 (the Act) was passed in 2021 to expand Australia's existing protections against abusive behaviour and harmful content online. The new legislation also gave the eSafety Commissioner (eSafety) new powers and resources to fight online harm at the international forefront.

In 2022, eSafety reported an alarming 65% increase in cyberbullying and a 55% rise in image-based abuse in Australia in 2021 alone. The significant surge in cases of digital harassment and abuse underscores the pressing need for immediate action. While most existing research has focused primarily on children and teenagers, little attention has been paid to the digital harassment and abuse experienced by adults, including cyberstalking and other unwanted online behaviour of a sexual nature. Two years on from the law's enactment, how effective has it been in curbing online bullying and making the online world safer for Australians? What valuable insights can we glean from this experience?

The Online Safety Act 2021

Prior to the promulgation of the Online Safety Act, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts (DITRDCA) drafted the Online Safety (Basic Online Safety Expectations) Determination 2021, which set the basis for the latest Online Safety Act, including an Adult Cyber Abuse Scheme first introduced by DITRDCA (formerly the Department of Communications and the Arts) in 2019. Despite counterarguments during the 2020 consultation on Online Safety Reforms and criticism over whether Australia is taking the right approach to these issues at all, the Act was ultimately passed and commenced on 23 January 2022, replacing a patchwork of online safety legislation with a more consistent and clearer regulatory framework.

The Act sets out minimum standards and clear expectations for online service providers, including social media platforms, search engines, and online forums, making them more accountable for the online safety of those who use their services. Specifically, the Act requires the industry to develop new codes to regulate illegal and restricted content, such as videos showing high-impact violence, nudity, or acts of terrorism, as well as a new complaints and objections system that allows individuals to report harmful online content and seek its removal.

The Act gives the eSafety Commissioner substantial new powers to take necessary action against adult cyber abuse, providing a safety net for all Australians. Following the law's enactment, eSafety published its Regulatory Guidance on the Adult Cyber Abuse Scheme in 2022. In cases where a complaint has been made to an online service provider but the adult cyber abuse material of concern has not been removed, eSafety can issue a removal notice to the provider or directly to the end-user who posted the material. If platforms fail to take down the abusive content within 24 hours of receiving the notice, eSafety can take legal action and impose fines of up to AUD 555,000 on online service providers.

Adult Cyber Abuse and the Safety Net

Essentially, adult cyber abuse refers to the use of online communication to intentionally inflict harm on the mental or physical health of someone who is 18 years or older. This can manifest in various ways, such as sharing offensive images, videos, comments, emails, or messages via social media or other internet-based services. In most cases, adult cyber abuse involves hate speech related to issues like sexism, racism, homophobia, transphobia, or Islamophobia.

Adult cyber abuse can include content that is excessively hurtful, especially when it repeatedly targets the same person. However, it is important to remember that the Act is not designed to regulate hurt feelings or purely reputational damage. Under the Online Safety Act 2021, adult cyber abuse encompasses material that targets an Australian adult and that (1) is intended to cause serious harm and (2) is menacing, harassing, or offensive in all the circumstances. The term "adult cyber abuse" is reserved for the most severely abusive material, which makes realistic threats or places people in real danger, so material such as bad online reviews or strongly worded opinions that does not meet both of the above criteria would not be considered adult cyber abuse under the Act.

Despite common assumptions that women are more vulnerable to such harmful behaviour, both women and men may experience sexual, gender-based, and sexuality-based digital harassment and abuse. In 2022, the eSafety Commissioner reported that male victims accounted for almost 60% of more than 4,000 reports received. While the majority of those who fall victim to "sextortion" (where intimate images and videos are used against them online) are male, "revenge porn" continues to disproportionately impact women.

Modernisation of Regulatory Framework

With online safety becoming increasingly complex, the Australian government is doubling down on its efforts to modernise the regulatory framework for digital platforms, and has tasked different regulators with handling specific aspects of digital platform services, including individual harms, collective harms, privacy, consumer protection, competition, and copyright.

To better coordinate the work of these regulators in the oversight of digital platforms, the Digital Platform Regulators Forum was established in 2022, with strategic priorities focusing on the impact of algorithms, increasing the transparency of digital platforms' activities, and protecting users from potential harm.

The Forum comprises members from various regulatory bodies, including the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), the eSafety Commissioner (eSafety), and the Office of the Australian Information Commissioner (OAIC), and aims to increase collaboration and capacity building among the four members to tackle issues across their primary lines of responsibility.

Individual and Collective Harms

While eSafety focuses on protecting individuals from abusive behaviour online, ACMA is responsible for content that targets a group of people. This may include misinformation or disinformation directed at a group, or sexually abusive material that has been shared online for public viewing.

In May 2022, ACMA and eSafety jointly published a Child Safety Policy that provides clear and practical guidelines for their staff when handling concerns, feedback, or complaints related to child safety. By adopting a risk-based approach aimed at minimising the likelihood of abuse or harm occurring, the policy enables ACMA and eSafety to meet community expectations and their obligations under the Online Safety Act, as well as several other legislative frameworks and national principles.

Privacy and Data Protection

If abusive behaviour online pertains to violations of data practices, such as doxxing or sharing others' data without consent, the OAIC steps in to safeguard privacy, personal data protection, and the safety of data handling. The OAIC also offers help to people under Australian jurisdiction who have been harassed, bullied, or defamed, including support and resources for those concerned about mishandled personal information, data breaches, or serious harassment.

Competition and Consumer Protection

On top of that, antitrust and consumer protection fall under the responsibility of the ACCC. In December 2017, the ACCC was directed by the Treasurer at the time, Scott Morrison, to conduct a Digital Platforms Inquiry examining the impact of digital platforms, including search engines and social media, on competition in media and advertising services markets; in particular, how they influence the supply of news and journalistic content, and the resulting implications for content creators, advertisers, and consumers.

Following public consultations and research, the ACCC published its Final Report in July 2019. In February 2020, the Australian government directed the ACCC to conduct a five-year Digital Platform Services Inquiry into markets for the supply of digital platform services and their digital advertising services, as well as the data practices of both digital platform service providers and data brokers.

Working toward a Final Report due in March 2025, the ACCC provides interim reports at six-month intervals, starting from September 2020, each focusing on a specific issue and covering practices of suppliers in digital platform services markets that may result in consumer harm. For example, in the September 2022 Interim Report, the ACCC compared the European and British approaches to safeguarding online safety.

Will Self-regulatory Industry Codes Protect Consumers?

In September 2011, ACMA released an Occasional Paper on administering self- and co-regulatory arrangements in the broadcasting, telecommunications, Internet, and radiocommunications sectors, an approach that places responsibility for regulatory oversight within each communications and media sector, underpinned by clear legislative obligations.

Nearly a decade later, eSafety Commissioner research conducted in 2020 revealed that 75% of Australian adults believe tech companies are responsible for people's online safety, while only 23% of respondents agreed that tech companies provide services with sufficient safety features by design. Privacy was ranked as the top issue to be addressed, by 73% of respondents, while 67% agreed on the need to scan user content so that illegal and harmful material can be detected and removed. More than half of respondents expressed the need for better age restrictions on content and automatic flagging of users' inappropriate language and behaviour.

As users demand more protection and regulators implement new rules under the Online Safety Act, the industry has been called on to set up its own codes and mechanisms to address those issues. It is worth taking a moment to look at how the industry is responding to these emerging issues and regulations, and, most importantly, to evaluate the potential consequences of these measures.

After the Online Safety Act came into force, the industry was instructed to develop industry codes specifying how it would put protections in place against the access to and distribution of harmful online content. Such material will be classified by eSafety as either "Class 1" or "Class 2", categories adapted from the National Classification Scheme used for rating films and computer games. Generally speaking, Class 1 covers material that would be refused classification, while Class 2 covers material that might be classified X18+ or R18+.

However, critics argue that the classification scheme, enacted in 1995, is outdated and may not be suitable for today's needs. Rating films for cinemas and classifying content for everyone online are two completely different things by nature. In addition, the self-regulatory approach adopted in the industry codes may provide relatively weak protection, especially for children. As a result, content that is out of step with public attitudes may remain available online when it should be taken down.

The Fine Line in Between

The digital environment has made sending and receiving intimate images and videos easier. An Australian Institute of Criminology study found that 50% of Australians aged 16 to 18 had sent a sexual picture or video of themselves, while 70% had received one. Unsurprisingly, the Australian Institute of Family Studies found comparable figures among adults aged 18 to 30.

Sexting in the digital age, especially among young people, is not uncommon, but the content is not always self-generated and the sharing of material is not always consensual. A survey by RMIT University and Monash University found that 20% of Australians have had a sexual image shared without their consent, while the Internet Watch Foundation revealed that 88% of self-generated nude images had been shared or uploaded to other sites.

As teenagers transition into young adulthood, they face a "dual vulnerability": they are exposed to sexually explicit material while also potentially contributing to the creation of such content. This places them in a particularly dangerous position, where they may face unintended consequences such as child pornography charges. The motivations behind sexting, however, can lie at two extremes, from consensual exchanges with a partner to extortion, intimidation, or other forms of abuse. The complex nature of consent is what further complicates this issue.

While sexting is just one of many issues we are dealing with, there must be measures in place to protect vulnerable youth. For instance, government-mandated age verification can serve as a gatekeeper to prevent minors from accessing pornography websites, as well as to pressure the industry to bear more responsibility for protecting underage individuals.

In fact, Australia is expected to introduce legislation on a new digital identity scheme by the end of 2023. This scheme will make it easier for e-commerce platforms, especially those related to online gambling and alcohol sales, to verify customer identities without collecting excess personal information. While the Australian government has been considering extending similar measures with background checks and ID verification systems for dating apps to protect people who look for love online, we shouldn’t underestimate their potential downsides, such as concerns over privacy, user safety, and data protection.

While no single solution can solve all problems overnight, it is crucial for governments to take the lead in introducing legislation for measures such as age verification or background checks in cyberspace, in order to create a more secure online environment for everyone.

The industry undoubtedly has an inescapable responsibility to protect users' online safety, but let's not forget the importance of including consumers' opinions, which the current industry-driven approach neglects. One way to achieve this is through open public consultation, although this may not be the most effective method. A more productive approach could be to work in closer collaboration with civil society and academic institutions to ensure that public interests are represented in policy development. In essence, bringing together stakeholders from different sectors and actively listening to their views and opinions is the key to building a more constructive and inclusive dialogue, ultimately leading to more effective policy development that benefits everyone.

Epilogue

Despite controversies over the self-regulatory industry codes that will operate under Australia's Online Safety Act, industry associations submitted draft industry codes in November 2022 in accordance with Part 9, Division 7 of the Act. However, the eSafety Commissioner did not consider the drafts sufficient to protect Australian users of the industry's services.

On 3 March 2023, the eSafety Commissioner further extended the deadline to the end of the month for the online industry to resubmit draft industry codes that provide appropriate community safeguards for users in Australia. Following the deadline, eSafety Commissioner Julie Inman Grant will assess each draft code and make her final decision. If the industry fails to submit appropriate codes on time, the eSafety Commissioner has the power under the Act to determine industry standards, which would again be subject to consultation with the industry and the public.

In the meantime, the Singapore government has been empowered by the Online Safety (Miscellaneous Amendments) Act 2023 to issue directives to social media platforms to block local access to "egregious" content and to impose penalties if businesses fail to comply. Unlike the Australian approach, regulated online communication services are governed by an additional set of online codes of practice established by the regulator, the Infocomm Media Development Authority (IMDA), with requirements to establish internal systems that restrict access to content that risks causing significant harm to children and other vulnerable users, as well as regular auditing processes to demonstrate compliance and regular reporting to IMDA.

The approaches taken by Australia and Singapore fall at different ends of the spectrum; however, both countries have taken proactive measures to address these issues by enacting legislation that puts the onus on digital platforms to be more accountable for the content on the services they provide. This highlights the necessity for governments to take a more active role in regulating online content, but it also reflects the growing need for a multi-stakeholder approach that considers public interests while pursuing the shared objective of safeguarding online safety. These developments are only part of a broader trend in regulating online content worldwide, alongside the United Kingdom's Online Safety Bill, the European Union's Digital Services Act, and Canada's upcoming Online Streaming Act (Bill C-11) and Online News Act (Bill C-18). The actions of Australia and Singapore may serve as models for other countries in the Asia-Pacific region grappling with how to regulate the digital world and looking for ways to balance freedom of expression with protecting users from harmful online content.


This article is part of the inaugural report of the Asia Pacific Policy Observatory. The April 2023 edition focuses on rights, privacy, and freedom in cyberspace in the digital age. Read the full report here.