Australia’s Online Safety Codes & The Battle Against Child Sexual Exploitation – Jenna Manhau Fung

Australia’s Standards and World-first Industry Codes

Australia’s Online Safety Act commenced on January 23, 2022, providing for the online industry to develop codes regulating “class 1” and “class 2” illegal and restricted online material, and for Australia’s online safety watchdog, the eSafety Commissioner (eSafety), to register those codes once they meet the statutory requirements.

Since mid-2021, the independent regulator has closely engaged with industry associations and participants on code development. In its position paper released in September 2021, eSafety outlined its insistence on an outcomes-based model for the codes and established 11 policy positions covering the substance, design, development, registration, and administration of industry codes for class 1 and class 2 material.

As suggested in the paper, the industry adopted a two-phase approach: phase 1 focuses on all class 1A and 1B material (excluding online pornography), including child sexual exploitation material and terrorist material, while phase 2 focuses on all forms of online pornography and other high-impact content that falls within class 2.

On April 11, 2022, eSafety issued official notices to six industry associations, constituting steering groups that oversee code development for the eight industry sections outlined under the Online Safety Act, requesting the formulation of phase 1 industry codes.

The first phase of codes was open for public consultation in September 2022, and the draft codes were submitted to eSafety on November 18, 2022. However, on February 9, 2023, eSafety wrote to the industry associations advising that the submitted draft codes were unlikely to meet the statutory requirements for registration and requested resubmission by March 31, 2023.

Decision on Industry Code Registration: Two Codes Declined

On May 31, 2023, eSafety conclusively decided against registering two of the eight online safety codes submitted by the online industry on March 31, 2023. The codes for five of the other industry sections concerning class 1A and 1B material, namely the Social Media Services Code, App Distribution Services Code, Hosting Services Code, Internet Carriage Services Code, and Equipment Code, were registered on June 16, 2023, and came into effect on December 16, 2023.

This decision arose from perceived shortcomings in the codes proposed for Relevant Electronic Services (including dating sites, online games, and instant messaging) and Designated Internet Services (including apps, websites, and file and photo storage services like Apple iCloud and Microsoft OneDrive). As a result, eSafety moved forward with formulating mandatory industry standards for these two sections, as empowered by the Online Safety Act 2021.

The eSafety Commissioner, Julie Inman Grant, said the proposed codes “didn’t go far enough” to establish robust community safeguards or implement reasonable measures to prevent these services from becoming conduits for “the worst-of-the-worst online content, often illegal content, including child sexual abuse material (CSAM) and pro-terror content”.

In 2021, the Australian Centre to Counter Child Exploitation (ACCCE) Child Protection Triage Unit received more than 33,000 reports of online child sexual exploitation, each containing images or videos of children being sexually assaulted or exploited for the sexual gratification of online child sex offenders. That same year, the Australian Federal Police (AFP) charged 237 individuals with 2,032 alleged child abuse-related offenses.

In the first quarter of 2023, eSafety noted a staggering 285% increase in reports of child sexual exploitation and abuse material received by its office compared to the same period in 2022. The imperative for action is escalating, and a stronger framework is very much needed for more effective community safeguards.

Search Engine Services in the Age of Generative AI

While five industry codes were registered and two declined, eSafety also reserved its decision on the draft Search Engine Services Code, as the increased functionality of search engines and the recent integration of generative artificial intelligence (AI) could render the draft code obsolete for its intended purpose. For instance, both Microsoft and Google announced AI-powered search capabilities in February 2023, prompting a rethink of the draft code.

The initial draft Search Engine Services Code focused only on material that search engines return in response to queries, overlooking material that these services might themselves generate. Consequently, eSafety requires search engines to regularly review and improve their AI tools, with measures such as delisting and blocking inappropriate material, to ensure class 1A material is not returned in search results. In addition, search engines are obligated to research technologies that would help users detect and identify deepfake images accessible from their services.

As generative AI is further democratized, the potential for malicious use of such powerful tools grows. Instead of playing “whac-a-mole”, eSafety has taken a more proactive stance on these pressing issues. In response to eSafety’s request, services like Google, Bing, DuckDuckGo, and Yahoo will have to take “appropriate steps” to prevent the spread of child exploitation material and reflect those steps in the revised code.

On August 14, 2023, a new Search Engine Services Code, addressing the risk associated with this new technology, was submitted. The code was later registered on September 12, 2023, and will come into effect on March 12, 2024.

Mandatory Industry Standards

Given that the Relevant Electronic Services Code and the Designated Internet Services Code were declined registration, both industries, Relevant Electronic Services and Designated Internet Services, are now slated for mandatory and enforceable industry standards. In alignment with the registered codes, the industry standards adopt the same outcomes- and risk-based approach and will operate in conjunction with them.

The two draft phase 1 industry standards, crafted by eSafety, underwent a 31-day public consultation ending December 21, 2023, and it appears to be just a matter of time before eSafety finalizes phase 1 development.

However, the proposed mandate for proactive detection and removal of content on cloud and messaging services in the recently released draft standards prompted pushback from international civil society organizations, including Article 19, Digital Rights Watch, Access Now, and the Global Encryption Coalition Steering Committee, whose open letter gathered over 600 signatories, including Signal, Mozilla, and Proton.

In the pursuit of safeguarding Australians, client-side scanning was proposed as a method. If adopted, the standards would compel end-to-end encrypted services to actively scan for and remove child sexual abuse material (CSAM) and “pro-terror” content, posing a fundamental challenge to privacy and security.
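To illustrate why client-side scanning collides with end-to-end encryption, consider a minimal sketch of the technique. This is a hypothetical illustration, not any vendor’s actual implementation: real deployments use perceptual hashes (such as Microsoft’s PhotoDNA) that also match near-duplicate images, whereas the cryptographic SHA-256 digest used here only matches exact copies, and the blocklist entry is a made-up placeholder.

```python
import hashlib

# Hypothetical blocklist of digests of known illegal material. In practice
# such lists are distributed by clearinghouses and use perceptual hashing;
# SHA-256 is a simplified stand-in for illustration only.
BLOCKED_DIGESTS = {
    hashlib.sha256(b"known-illegal-sample").hexdigest(),
}

def scan_before_encrypt(attachment: bytes) -> bool:
    """Return True if the attachment may proceed to encryption and sending.

    The scan runs on the user's own device *before* end-to-end encryption
    is applied, which is why critics argue the approach undermines the
    confidentiality guarantee that E2EE is meant to provide.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKED_DIGESTS

# A clean file passes; a blocklisted one is refused before encryption.
assert scan_before_encrypt(b"holiday photo")
assert not scan_before_encrypt(b"known-illegal-sample")
```

The crux of the civil-society objection is visible in the structure itself: the inspection point sits inside the endpoint, before any encryption, so the service (or anyone who compromises the blocklist) gains a pre-encryption vantage over all user content.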

The final product of industry standards will apply to a wide range of services, including email, instant messaging, personal file storage, and more. While combating CSAM and online extremism is of utmost importance, these endeavors must not jeopardize the security and privacy of Internet users. The eSafety Commissioner should refrain from advocating the incorporation of vulnerabilities and endorsing back doors that undermine end-to-end encryption in the first place.

The Number One Priority of Big Tech Companies

In mid-October 2023, Elon Musk’s social media platform X (formerly known as Twitter) became the first online platform fined under Australia’s Online Safety Act. The company was fined 610,500 Australian dollars by the eSafety Commissioner for failing to meet basic online safety expectations and to share information about its efforts to combat child sexual abuse content.

In February 2023, eSafety issued legal notices to X, Google, TikTok, Twitch, and Discord, seeking clarification on their approaches to tackling child sexual abuse and blackmail attempts on their platforms. Unfortunately, neither X nor Google complied with the notices: Google dismissed eSafety’s questions with generic responses, while X left some of them unanswered. The companies were initially given 35 days to respond, but multiple extensions stretched the process to seven months.

The eSafety Commissioner revealed that many of these platforms lack mature systems to detect, remove, and prevent child abuse material. Notably, X’s automatic detection of such material plummeted from 90% to 75% within three months after Musk took ownership of the company and cut its workforce by 80%. Surprisingly, eSafety found TikTok to be the most transparent of the platforms.

Despite these online platforms proclaiming child safety as their top priority, and notable companies like Google and X endorsing the Five Eyes’ Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, many of them fail to live up to their promises. This issue is not exclusive to Australia; a similar mismatch in content moderation resourcing has also been observed in Europe.

According to reports submitted to the European Union (EU) in September 2023, X has only 2,294 content moderators to ensure compliance with EU online content rules and address illegal and harmful content on its platform, as mandated by the recently adopted Digital Services Act (DSA). In contrast, Google’s YouTube boasted 16,974 content moderators, while Google Play and TikTok had 7,319 and 6,125, respectively.

Epilogue: Deepfakes and Pornography

Australia’s Communications Minister Michelle Rowland proposed several amendments to the government’s expectations of online service providers following accusations of inadequate efforts to ensure children’s online safety. The proposed changes aim to pave the way for an “appropriate” age assurance mechanism and improved processes for preventing children’s access to class 2 material. This shift will require many products to do more than rely on basic self-reported age verification, such as asking for a user’s birthday.

A recent eSafety analysis revealed that one in eight of 1,330 child sexual abuse material reports were “self-generated”, with predators coercing children into filming and photographing themselves performing sexually explicit acts. A more robust age verification mechanism holds the potential to mitigate the risks children face in such situations.

The second phase of industry code development, focusing on all forms of online pornography and class 2 material, has not yet formally commenced, though the Australian government already expects tech companies to adopt a more proactive approach, particularly in limiting generative AI capabilities that produce material such as deepfake pornography or facilitate unlawful or harmful activities.

eSafety expects to issue the phase 2 notices, requesting industry associations to develop and submit draft codes for registration, after the conclusion of the first phase and the determination of the industry standards for relevant electronic services and designated Internet services. In the race against time and an ever-evolving online environment to provide essential guardrails for children, Australia has earned a reputation for its regulatory regime in recent years. Remaining optimistic, we can anticipate that the final products will prioritize both privacy and safety, incorporating “reasonable and appropriate” measures as promised.

*This article was originally published in the Asia Pacific Policy Observatory December 2023 Report.